Building social, political and technical infrastructure

In a wonderful show of support for the right to freedom of expression on Sunday, world leaders descended on Paris to participate in a march in solidarity with the people of France.

This event would have been even better if the leaders in question were not so selective in their support for freedom of expression. As Daniel Wickham pointed out in a long sequence of Tweets, many of those attending the march have in recent years overseen the imprisonment, torture or murder of journalists.

This of course makes for a pretty weak gesture. When Benjamin Netanyahu, Sergey Lavrov, and Ahmet Davutoğlu march in support of free speech, they are disrespecting everybody who has died for their right to do so. But rather than dwelling on the hypocrisy, it’s worthwhile to consider how to move beyond it.

The first order of business would be to release political prisoners. These would include journalists such as Aziz Kayed, Ahmad al-Khatib and Mustafa al-Khawaja, Hatice Duman, Mustafa Gök and Cüneyt Hacıoğlu; Sergei Reznik and Aleksandr Alesin; the staff at the Syrian Center for Media and Freedom of Expression; Abduljalil Alsingace, Ahmed Humaidan and Hussein Hubail. A broader campaign supported by the attendees of the march could lead to the release of Khadija Ismailova, Eskinder Nega, Jean Laokolé, Mohammed al Maqaleh, Muhammad Anwar Muna, Chelsea Manning, and various other political prisoners.

Once this has been done, the universal adoption of new standards and practices around freedom of expression would be a good move. This could include such radical concepts as eliminating imprisonment as a punishment for libel, decriminalizing blasphemy, and repealing any laws which restrict media activity. This could mean a sane libel law in the UK, the liberalization of religious speech in Ireland, and the elimination of the media censorship committee in Hungary, among other great things.

The next appropriate step would be to put in place appropriate safeguards to protect the global communications infrastructure from attempts at censorship and selective availability. This could be done by enshrining the principle of network neutrality in law, repealing existing Internet censorship laws, and reforming copyright to prevent DMCA-style notice and takedown procedures from being used as a general purpose censorship device.

There are of course many other things which need to be done, but this would be a good start. How about we set some deadlines? We could agree for instance to release the prisoners by the end of January, complete the legislative reforms by the end of 2015 (it is understandable that these things take time), and then perhaps these righteous world leaders could meet again in Paris in January 2016 for a summit on where to go from there.

Since the Snowden revelations started to inform the public about the ways in which western governments have been spying on everybody, a number of international diplomatic relations have soured, as have many relationships between governments and their electorates.

The actions of the governments of these countries have rendered them entirely untrustworthy. Their only avenue to regaining trust is to dismantle military-surveillance artefacts that are not physical, cannot be visually accounted for, that exist in a post-scarcity economy, with no meaningful limit to how many surveillance systems can be in place and no way of counting them. It is impossible to prove that this has been done. We must therefore hereafter assume that it is going to continue forever.

I publish this now in the context of a mass murder perpetrated against a group of journalists, in the name of religion. This is a terrible deed, but it is no more terrible than some of the reactions: it is almost as if certain fascist political actors are rubbing their hands together in glee over the atrocities, for events like these can be used to lend credence to downright disgusting political agendas. Marine Le Pen of course is overjoyed by the inevitable spike the Charlie Hebdo shooting will cause in her party’s following — as Bob Altemeyer pointed out in his book The Authoritarians, people seem spring-loaded to become more authoritarian in times of crisis. But when a French political party leader, even a fascist one, calls for the reinstatement of the death penalty, then it is high time for everybody to become very, very afraid.

This is a transcript of a talk I gave in Warsaw in September 2014, where I discussed some of the problems that make blanket surveillance easy, some of the possible approaches to eliminating broad state surveillance capacity, and put that into the larger geopolitical context of ongoing international information warfare. This was a continuation of a series of previous lectures, consisting of “Where States Go To Die” (SHARE 2013), “Engineering Our Way Out of Fascism” (FSCONS 2013), “Humanity Scale Information Security” (NullCon 2014) and “The Political Implications of Technology” (Digital Activism Now, 2014).

6. Surveillance is Easy

When the cold war ended, suddenly a generation of people, whose primary role had been to defend against an indeterminate adversary in a war that never happened, were put into the worst possible situation for them. Peace. Relative peace, interspersed with small conflicts, but the entire logic of the nuclear bureaucracy was upended, and all the skill and talent that had been built up since the end of the second world war was suddenly rendered unnecessary.

All those idle hands. And yet, like any other artefact of military superiority from a bygone age, they were repurposed. Unlike, say, the fort at Komárom or the military base at Christiania, or the Roman roads, these people were not repurposed for the public’s benefit. They were put into various roles, including policy advice and research and development.

It is in those roles that hundreds or thousands of smart people with a Cold War mindset got into the peacetime business of preparing for the next big problem. Papers were written, drafts circulated, plans shaped. But people who are in the business of preparing for the worst aren’t very good at assuming good faith. So they came up with bad law proposals, and kept them in their rainy day boxes, just in case.

Meanwhile, a culture of fear was being cultivated. Cities were turned into panopticons. Buildings were fitted with cameras, and the cameras were fitted with face recognition software, and the face recognition software was fitted with databases containing everybody.

The overarching argument was at one point crime. Then it was drugs. Then it became terrorism. Terrorism.

When we call somebody a terrorist, we are pretending that their actions have no motives. That their only aim is terror. That there is no chance of any legitimate political argument or concern behind the atrocities. Ignoring the politics of the terrorist, and instead lumping them into vague demographics based on nationality or religion, serves two goals: First, to eliminate any chance of non-violent solutions to their political demands, and secondly, to expand the group of potential terrorists beyond a negligible group of extremists with a particular set of political demands to a large amorphous group of indeterminate membership, thereby justifying the encroachment of the civil liberties of everybody.

Then, of course, it isn’t just cameras. The state security services are staffed with smart, diligent people, who have been working hard on protecting their nation state from all of its indeterminate enemies. Because they’re smart, they know that you cannot fight your enemy without knowing your enemy. Unfortunately, they’re not smart enough to recognize that an enemy whose membership is intentionally, through willful ignorance, made to be indeterminate, cannot ever be known.

Thus the assumption that we must all be terrorists, and we must all, therefore, be known. Everything we do must be catalogued and understood. So our phones get tapped, and our Internet monitored. Our e-mails get read by machines and filtered through stupid, inaccurate computational linguistics models and slipshod statistical methods. Our passenger name records get analyzed for patterns. All of the data produced through the course of our increasingly interconnected lives are shoved through a pipeline of quantifications.

The state wants to find the outliers, and line them up against the wall. Fear isn’t cultivated because it’s fun. It’s cultivated as a means of manufacturing compliance, regardless of how insane the rules are.

In case you missed it, we live in a world of ubiquitous surveillance now. Information warfare is being perpetrated against us.

Surveillance is easy because ignoring the politics of minorities is easy. Surveillance is easy because accepting the bent logic of the state is easy. Surveillance is easy because the post cold-war nuclear bureaucracy got bored.

5. You are making Surveillance Easy

So one might say “down with the state,” with no plan for replacement, as if nihilism had any chance of improving our situation. It does not. Not only because there are an unknown number of devices spying on our activities, and not only because there is no way to find out where they are, and guarantee that we’ve turned them all off, but also because we willingly and actively submit ourselves to the surveillance.

You are carrying a device in your pocket that constantly keeps track of where you are, and reports it back to its overlords — the phone company. The phone company also keeps track of who you call and when, and for how long, and who you message, and which websites you visit, and in which order. The phone company diligently complies with the demands of the state. If you are in Poland, they reported you to the authorities over a million times last year. The Stasi were never that efficient.

But it gets worse. You may use Facebook, or Twitter. You might use GMail or Yahoo for your e-mail. You might use Dropbox for your files, or iCloud maybe. These systems not only spy on you, but they aggregate your information and sell it to the highest bidder. And the second-highest. And the third. Actually, one of the most common business models of the cloud is to sell your data to everybody who wants to buy it. How do you think Facebook makes money? Do you think they’re allowing you to post pictures of your lunch or observations about the weather out of the goodness of their hearts? There is, to date, little evidence that the people running Social Surveillance Networks have hearts.

Cloud providers, as they are called, do of course have privacy policies, where they make vague promises not to harm you. But the definition of harm is narrow, and the scope of potential harm is broad. When you choose to put your data in the cloud, you are choosing to risk that it might rain. They can promise it’ll never rain, but the rain still comes, as many celebrities became profoundly aware last week.

But it gets worse: even if you don’t use GMail or Yahoo for your e-mail, there’s a high probability that your friends do. When your friend uses a centralized e-mail service, they are exposing your activities to these companies, who may then report it to the state. When your friend uses GMail, your friend is reporting you to the authorities. Automatically. The Stasi were never that efficient.

When we choose to use Social Surveillance Networks, we are choosing to allow people of dubious moral fibre, with awkward relationships with governments, to keep track of us. And yet we can’t stop using Social Surveillance Networks any more than we can stop breathing: it is how we communicate now.

The only thing we can do is to be very clear about what is permissible, and what is not.

You are making surveillance easy by not being clear about what is permissible. You are making surveillance easy by accepting the bent logic of the Social Surveillance Networks. You are making surveillance easy by using the cloud. It will rain.

4. We made Surveillance Easy

So one might say “technology should protect us,” ignoring entirely the political implications of technology. Technology is neither good nor bad, nor is it neutral, as Melvin Kranzberg has pointed out.

There are two ways to enforce any rule: enforcement by policy, and enforcement by design.

When you enforce a rule through policy, then the rule is kept as long as the policy is not changed, and nobody violates the policy, and nobody forgets to enforce it. It works well while everybody is playing nice.

Getting everybody to play nice is a bit like getting everybody to eat their vegetables. Most people will do it, because they know it’s good for them, but some people will refuse, because you know, they just don’t like the taste of broccoli.

Enforcement by design is a different type of thing entirely. It is where the rule is built into the system, in such a way that the universe prevents the rule from being violated. Gravity is a rule that is enforced by design. Imagine what would happen if there were a gravity committee that met every Tuesday. There would be chaos. Thankfully, the universe is not governed by committees, and it is very good at making sure certain rules never get violated.
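The distinction between the two kinds of enforcement can be sketched in code. This is a toy illustration, not anything from the talk: the names (`transfer_by_policy`, `Balance`) and the overdraft rule are invented for the example.

```python
# Enforcement by policy: the rule lives in a procedure that someone
# must remember to run. Forget the check, and the rule gets violated.
def transfer_by_policy(balance, amount):
    if amount > balance:          # the "policy" check
        raise ValueError("overdraft")
    return balance - amount

# Enforcement by design: the rule is built into the data type itself,
# so an invalid state cannot be constructed in the first place.
class Balance:
    def __init__(self, value):
        if value < 0:
            raise ValueError("a Balance can never be negative")
        self._value = value

    def withdraw(self, amount):
        # Returns a new Balance; construction enforces the invariant,
        # so no caller can ever "forget" the overdraft check.
        return Balance(self._value - amount)

    @property
    def value(self):
        return self._value
```

In the first version the rule holds only while everybody plays nice; in the second, violating it is as impossible as violating gravity, because the system offers no way to express the violation.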

But the design still has to happen. To prevent surveillance, there are three methods:

Decentralization. It is harder to watch everybody when nobody is in the same place. When everybody goes to one place, we call it a single point of failure. If that point fails, everything fails. And if that point surveils, everything is surveilled. Facebook is a single point of failure for over a billion people now. Twitter is a single point of failure for about 600 million people. Skype is a single point of failure for another 600 million people. GMail is a single point of failure for at least half a billion people. Decentralized networks, by comparison, are pretty much impossible to surveil, and thankfully the Internet was designed from scratch to be decentralized. Unfortunately, a lot of the businesses on the Internet think that the only way they can make money is by building single points of failure. They made the technical decision to violate one of the most important design decisions of the Internet for their own gain, and we are all paying the price.

Encryption. Some mathematical operations are very easy to do but practically impossible to undo. This is important because it allows us to send messages in secret. This is useful for banking, it is useful for commerce, but it’s also useful for political activism, or police activities, or keeping healthcare records safe. Encryption is, however, used very sparingly. Next time you visit a website, check if it says “https” at the top. If it only says “http”, without the “s”, then your communications are not encrypted. Unfortunately, HTTPS is hard to use, and it has many flaws, so most websites don’t use it. In fact, about 700 of the largest 1000 websites in the world don’t enforce HTTPS encryption. E-mail is even worse: in order to encrypt that, people are required to learn mystical magical incantations called PGP, and even those who have learned this horrible type of magic get it wrong every now and then. This is because PGP was never designed for normal people. It was designed by elitist technologists for use by elitist technologists, and for that we are all paying the price.

Hardening of computational endpoints. This is a bit more complicated, but generally what it means is that we need to write better software. Unfortunately, the common approach to software development is to make something that doesn’t work and then keep poking it until it does. If buildings were made the way software is, they would look ugly, stand at odd angles, and suddenly collapse. This isn’t just because software developers are bad at developing software, it’s also because software is hard. But long story short, most software is riddled with severe bugs that make surveillance easy.
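The asymmetry described under Encryption above — easy to do, practically impossible to undo — can be sketched with a toy discrete-logarithm example. Everything here is illustrative: real cryptosystems use far larger, carefully chosen parameters, and the constants below are assumptions made for the sketch.

```python
# Modular exponentiation is fast even for enormous numbers, but
# reversing it (the discrete logarithm) has no known efficient
# general algorithm. That gap is the raw material of public-key
# cryptography.
p = 2**127 - 1          # a Mersenne prime, used here as the modulus
g = 3                   # generator, chosen for illustration
secret = 123456789      # a private exponent

public = pow(g, secret, p)   # the "easy" direction: milliseconds

# Recovering `secret` from (g, p, public) means solving the discrete
# logarithm problem. The only general approach is exhaustive search,
# which is hopeless once p is large enough.
def discrete_log_brute_force(g, p, target, limit):
    value = 1
    for exponent in range(limit):
        if value == target:
            return exponent
        value = (value * g) % p
    return None
```

With tiny parameters the brute force succeeds instantly; at the sizes real systems use, the same search would outlast the universe. That is the whole trick: the defender pays milliseconds, the attacker pays aeons.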

The technical community created this mess, by making poor decisions and by valuing speed and profits more than stability or security. The greybeards who built the Internet created this situation because they had faith in the system, in the nuclear bureaucracy of the cold war era. When the guys with the shiny shoes came and told them not to build in encryption, they said okay, because they thought the government was their friend.

The technical community has a lot to answer for.

We made surveillance easy by pretending that a few big centralized services weren’t a problem. We made surveillance easy by making PHP and MySQL easier to use than HTTPS and PGP. We made surveillance easy by believing in the benevolence of the governments. We made surveillance easy by writing bad code. We made surveillance easy by not caring enough about people.

3. We cannot stop Surveillance

So one might ask, “how come the public has unintentionally conspired with governments and the technical community to eliminate privacy?” The answer is democracy.

I have so far not mentioned Edward Snowden, and have been working on the supposition that he needed no introduction. But let’s imagine a world where most countries operated on the principle that its laws were created by a group of people who were selected in a fair election by the adults in each country. These people also make executive decisions, such as managing roads and waging wars. They also decide who gets to be judge. Now imagine what would happen if these people decided to do something absolutely horrible, and never tell the public. How would we ever know? As long as the guise of democracy was maintained, we would have no proof that they weren’t working for our benefit.

The only reason we know that the governments of this world have been waging war on us is because Edward Snowden told us. Oh, we had our suspicions, but we had no proof. And he gave us proof of activities being conducted against us that were way beyond anything we could have imagined. But, note, he only told us of the activities of the US and UK governments, and a bit about their Five Eyes partners. There still has been no Chinese Snowden, or Russian Snowden, or Indonesian, or Nigerian, or even Polish Snowden. There is an entire world of bad stuff being done behind our backs.

Stopping surveillance is impossible, because surveillance can happen without us knowing. Projects like PRISM and TEMPORA and Boundless Informant could, in theory, be defunded, but the technology already exists and can’t be un-invented. Even if the NSA were abolished, like the Black Chamber was in the 1920s, the technological artefacts won’t be dismantled because there is no way to prove that they have been dismantled. You can’t dismantle a piece of software, you can just stop running it. But there’s no way to prove that other people aren’t still running it.

Moreover: abolishing the NSA would do nothing to reduce the capacity of the FSB, or GCHQ, or the BND. The US may be attacking us more than everybody else, but that doesn’t mean the others aren’t attacking us.

Since we cannot stop surveillance, we must learn to live with it. We need to learn to live in a world of perpetual information warfare, where states attack each other and all of them attack us. But that does not mean that we need to accept surveillance, or make it easy, or even allow the surveillors to get away with it. Not at all.

We cannot stop surveillance, but the good news is we don’t have to.

2. We can make Surveillance Expensive

The best thing we can do in this situation is make surveillance prohibitively expensive to maintain. In order to do that, we need to be very serious about our demands. We must demand decentralization, strong encryption, and hardened endpoints. But we must also demand political accountability.

Making surveillance economically expensive will reduce the activities of the surveillance agencies. Making surveillance politically expensive will reduce the activities of the governments and the corporations.

My current estimation of how much it costs to monitor everybody is about 25 cents per person per day. It’s a rough estimate: I took the approximate budget of the largest surveillance alliance, the Five Eyes, and divided it by the number of people who use the Internet. It’s changed a bit over the last year: I estimated it as being around 13 cents per person per day back when Snowden first revealed this activity to us. Since then, more and more people have been adopting strong encryption, even though it’s hard, people have made greater demands of security, and things have gotten a little bit better overall.
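The estimate above reduces to a one-line calculation. The budget and user figures below are illustrative assumptions, not the actual inputs behind the 25-cent figure — which is why the result differs; the point is the method, not the numbers.

```python
def cost_per_person_per_day(annual_budget_usd, internet_users):
    """Rough cost of blanket surveillance, in US cents per person per day."""
    return annual_budget_usd / internet_users / 365 * 100

# Illustrative inputs only -- neither figure is from an audited source.
# A combined surveillance budget on the order of $75 billion a year,
# spread over roughly 2.8 billion Internet users (2014-era figures):
estimate = cost_per_person_per_day(annual_budget_usd=75e9,
                                   internet_users=2.8e9)
# roughly 7.3 cents per person per day with these inputs
```

Whatever the exact inputs, the mechanism of the argument is visible: the denominator is fixed by the size of the Internet, so anything that inflates the numerator — widespread encryption, decentralization, hardened endpoints — drives the per-person cost up.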

1. Surveillance does not happen in a (Political) Vacuum

Surveillance serves political ends. The objective is control, and we are the controlled. The logic of government is the logic of normalization. Only that which can be seen can be normalized. We must always be watched. If we are not watched, government cannot work.

This has been true throughout history. Surnames were created to give authorities a better understanding of who was who, so that people could be catalogued and taxed. We have passports and ID cards, so our flow can be controlled. Biometrics are becoming more and more popular. As technology has developed, the capabilities of humans have expanded, but so have the needs of the state to have perfect visibility.

That visibility extends not only to citizens of the state in question, but to all citizens of all states. In particular, those citizens who wield political power. Historically, those people are the kings and the presidents, but also the parliamentarians, and the state officials, and so on down. But now, for all the faults of the Social Surveillance Networks, they are facilitating greater communication, which is lending more political power to the public.

Surveillance is a weapon. We are, as a species, engaged in information warfare. Bellum Omnium Contra Omnes, Hobbes said, the war of all against all, could only be avoided if there were strong centralized governments. Because, he said, humans are not angels, and we cannot be trusted. As it happens, governments are not angels either. And those with much power can be trusted even less than those with none.

0. This is a Cold War that can Never End

I’ve been calling this information warfare, but the question remains whether this can be called a war at all. I posit it can: the Internet Engineering Task Force has defined pervasive surveillance as an attack. When a person attacks another person, we call it a crime. When a state attacks another state, we call it war. When a state attacks its own people, we call it a civil war — no matter how uncivil it is. Incidentally, when the people retaliate against states, it is called terrorism.

But when nobody dies from the war, and instead of broken houses and broken lives we simply live in constant fear, we call it a cold war. This is a cold war, but it’s not like the last one. In the past, the nuclear bureaucracies of the world were engaged in a standoff against each other. Now, the old nuclear bureaucracies are engaged in a standoff against us. And we’re unarmed.

One of the most interesting documents generated during the cold war era was a document generally referred to as the Long Telegram. Written by George Kennan, it is the first document to suggest the US strategy of containment, whereby the USSR would be prevented from spreading its political influence or ideology, and would be allowed to rot from the inside until the point of collapse. It is effectively the cold equivalent of a war of attrition.

This is my Long Telegram. I am calling for a war of information attrition against those in this world who would seek to wield their power against the general population, in whatever form. This needs to happen on all levels, but the opening step involves rendering ourselves illegible to the surveillance state. It’s really easy: just be as confusing to the state as possible. Break the logic of the state. If it can’t understand you, it cannot fight you.

This was originally published at the Center for a Stateless Society on October 12th, 2013. It is a transcript of a talk I gave at the SHARE Boat Camp in Croatia in August 2013, on board the Galeb.

Military Artifacts

All over the world, landscapes both urban and rural are littered with military artifacts from bygone times. These artifacts have completed their lifecycle as objects of power, force and control, and have either been repurposed or forgotten.

Repurposed artifacts gain new meaning in the world, as they take on new roles. The former military base of Christiania in Copenhagen became a self-organizing free town. In Keflavík, a former US Navy base was converted into a university. In Florence, a former juvenile prison was turned into a safe haven for human rights defenders. In many places, former strongholds with relatively little public value have become tourist attractions, such as the tunnels inside the rock of Gibraltar, the castle in Ljubljana and the fortress of Komárom.

Some of these military artifacts don’t need to be explicitly repurposed to retain public value. In Europe, roads built by and for Roman armies up to two thousand years ago still form many of the transportation backbones of the continent. Without roads, there could be no trade.

But as time has gone on, military artifacts have become less amenable to public repurposing. While we might find some potentially beneficial use for the odd warship, the NORAD facility in Cheyenne Mountain isn’t going to become a theme park anytime soon, and despite Arnold Schwarzenegger’s suggestions, thermonuclear devices cannot be turned into snow cone makers. And while it is also conceivable that some guy might come along one day and convert an ICBM into a spaceship for faster-than-light travel, I’m not going to hold my breath.

Nuclear Democracy

Nuclear weapons are interesting artifacts. It is a matter of public record that almost ten thousand nuclear weapons have been constructed. How many were constructed outside of the public record is anyone’s guess. Where they are is also an open question. A nuclear device belonging to the US military was found in the sea off the coast of Greenland a couple of years ago, and nobody could publicly explain how it got there. And that’s the US – a country that appears to have at least a vaguely competent military and a relatively stable political atmosphere. Consider the artifacts left behind from the USSR. Not all visually accounted for, I’d venture to guess.

It has been said that the nuclear bomb is a fundamentally undemocratic device: It has widespread impact, it is unspecific as to which humans it harms, it is expensive to source materials for and complicated to build. While an Ulam-style trigger mechanism is really just a question of getting enough dynamite in the right place, plutonium isn’t something you can pick up at the next convenience store.

Contrast these to rifles: Easy to build, easy to use, limited range and action, fairly focused on a particular target, unless you’re using an AK-47, in which case the only serviceable objective is chaos. As such, they are a much more democratic form of military artifact. Although they cannot directly be repurposed beyond a certain degree, there may be legitimate use for them outside of warfare.

What unites all of the artifacts I’ve mentioned so far is that they are physical. They can be visually accounted for. They exist in a scarcity-based economy. There is an upper limit to how many nukes can be built here on Earth, there is a way of counting them.

And as determined by the START treaty, there is a way to dismantle them. Nuclear disarmament was a hotly contested and highly useful goal near the end of the Cold War, although the topic has somewhat fallen out of fashion today. It’s as if people have come to terms with the idea of certain people having the ability to wipe out all of humanity in the blink of an eye. After Obama first took office, he went and had a conversation with Putin about disarmament, but there hasn’t been much media followup since then. Are there fewer nukes now than there were five years ago? I doubt it.

But an ICBM is a relatively hard thing to hide. This we know in part because if Scotland gets independence from the UK, the net number of nuclear powers in the world remains constant, although the identity of one of them changes: the UK’s nuclear stockpile is for the most part poorly hidden in the highlands. So if we did at some point get serious about disarmament, we’d know where to go, modulo some degree of military ingenuity and political madness.

Utopian Indulgence

With nukes, there is an exit strategy. In recent weeks, we have been granted some rather disturbing insights into the world of surveillance. We have heard of Prism, Boundless Informant, Tempora, and other things, the goal of which is not to spy on enemies of the state, but to spy on everybody on the assumption that we are all enemies of the state.

Let us indulge in a utopian form of escapism for a moment and posit the possibility that US President Barack Obama were to appear live on all the networks tonight, terrestrial and satellite, and declare that these catch-all surveillance programs would be abandoned forthwith, that all of the collected information – several hundred billion database entries – and all of the surveillance equipment would be destroyed.

If the US government had any credibility left, there would be instant jubilation. Peace would break out and victory would be declared, of some kind. But this is not the case. The US government was already running on the fumes of its credibility by the time Chelsea Manning exposed a shocking number of war crimes perpetrated in full knowledge of the upper echelons of the US government, and in terms of credibility it sputtered to an unceremonious halt when it was exposed that they had for at least seven years been conducting massive pervasive intrusions into the privacy of hundreds of millions of people around the world, violations against the trade secrecy afforded to companies globally, and quite literal invasions into the sovereignty of possibly every country on the planet.

This is not to say that all parts of the US government are rotten – not at all. On the contrary, many people within the US government or working for it are decent people with good intentions: The existence of people such as Edward Snowden, Chelsea Manning, Thomas Drake and Bill Binney is proof of this. The problem is not with the people, as such, it is with the structures and the behavior those structures breed.

If we return to our indulgence, the onus on the US government in this situation is to prove that they have dismantled their surveillance systems. But how could this be accomplished? There is no easy answer.

Dismantling Realpolitik

One of the fundamental challenges is that the US has ratcheted up their security apparatus to a point where any loosening would be construed by some as backing down. There are countries which might conceivably wish to take advantage of any weaknesses. There aren’t a lot of avenues for reduction.

One might argue that there is a possibility for the governments – and let’s remember that it isn’t just the US government, there’s the UK, Germany, France and many others – to back out of this surveillance quietly without alerting their enemies. But that would be moot – the public would not know, and thus public opinion would not be mended, and therefore little real benefit would come of it.

The understanding here is that any action taken by any of these governments now that does not lead to a better informed public on the one hand, and better protected rights to privacy on the other, is not going to be sufficient. So what are governments to do? There aren’t a lot of options.

The Death of the Republic

We have reached an impasse. On the one hand, the actions of the governments of these countries have rendered them entirely untrustworthy. On the other hand, their only avenue to regaining trust is to dismantle military artifacts that are not physical, cannot be visually accounted for, that exist in a post-scarcity economy, with no meaningful limit to how many surveillance systems can be in place and no way of counting them.

This is a catch-22. But we have seen this kind of stalemate arise before, numerous times in numerous empires, and they always had the same result. Some issue of contention comes up, ratcheting to the point where there is no feasible outcome. Politics be damned, military action is sometimes taken. Sometimes, it’s not country-on-country action. It’s the public using all of those repurposed artifacts to their own ends.

I am deeply worried by this possibility. While the little anarchist in me would be happy to see these governments replaced, I very much prefer soft landings. The republic as we know it needs an exit strategy. This means a few different things.

A Motion for Rebirth

First, we need some new way of creating structural transparency on the protocol level. This is to say that the institutions which serve us must be capable of exposing their activities directly to the public through a complete analytical mechanism. In practice this would mean that people are granted the capacity to be as well informed as they see fit.

Second, we need some new way of aggregating political will. This essentially means better collective decision making mechanisms, systems of direct democracy that allow everybody to express their social choices in a way that does not disempower them. Most direct democracy systems fulfill the requirement of allowing everybody to participate, but few fulfill the requirement of giving everybody a say. This needs to change, and until it does, there is no reasonable expectation that people will wish to participate.

The third thing is slightly more cumbersome, and more related to this discussion of military artifacts. The world’s political economy has been constructed over many centuries, imbued with the logic of empire. If you take any artifact from the economy, physical or electronic, military or civilian, the chances of its creation having involved the exploitation of humans somewhere are near certainty.

We need to figure out – and here I have no boilerplate solution – new organizational structures that don’t require exploitation. I know, I know. Slightly slipping back into Utopia here.

New Logic, New Artifacts

The hard problems are kind of obvious. We’re all here because we know that they need solving. Some look to the people standing on this deck for guidance and leadership in these issues. The reality is, nobody has the answers.

What we do know is that the logic of our current societies does not lead to equality, democracy and civility. It leads to Prism, Tempora and Boundless Informant. It leads to GCHQ, NSA, and BND. It leads to Tito, Obama and Lukashenko.

We need a new logic. This logic will only come about by the elimination of the existing states, the states that have rendered themselves untrustworthy by their actions against us. But as assuredly as the current system has generated the military artifacts of our time, the new logic will produce new artifacts, both military and civilian, and it is up to us to repurpose them to the benefit of everybody.

The Internet industries of America may just have inadvertently had their hats handed to them by the military industrial complex. Now it’s up to Europe to provide an alternative to the surveillance state.

Almost all of the major Internet industry giants are based in the United States. The reasons for this are historical and economic. The tradition of strong entrepreneurship practiced in the US since its inception, mixed with its purchasing power and its history of acquiring any sufficiently profitable venture or fascinating technology from abroad, has put the US in a prime position to be the global leader in the provision of Internet services.

That may just have ended. While US dominance over the roughly $11 trillion/year global Internet services market is still unchallenged, the damage done by the revelations about the NSA’s vast global surveillance scheme may stymie the industry’s growth, and perhaps even turn it into a localized recession in coming months and years.

The reason for this is Europe. While some Europeans are becoming increasingly comfortable with the notion of living in a surveillance state, most people on the European mainland still grow up hearing stories of totalitarian dictatorships, wars, genocides, and the Holocaust, and have a natural inclination to detest the notion of secret police. As more is learned of the US’s secret spying games – aided in part, it seems, by their English counterparts – outrage boils thickly in countries like France and Germany, where, despite societies that are highly open and inclusive in some senses, the notions of privacy as practiced in the United States have often been thought of as quaint. While American discourse on privacy is dominated by the philosophical foundations of the 4th Amendment, a slightly different, somewhat more subtle understanding of privacy reigns in European discourse, with an annoyingly elusive definition.

Over coming months and years, the US government’s betrayal of the people of the world will spur a new industry in Europe, not aimed necessarily at pure technological innovation, but rather simply creating secure, privacy-respecting alternatives to the software services provided by the US based companies that can no longer be trusted. We will see Czech and Hungarian startups bringing out new search engines and Croatian and Polish companies developing secure e-mail services. We’ll undoubtedly see surveillance-resistant chat software coming out of Austria and global map databases being developed in Estonia. Or something like that.

This is not to say that Europe is ready to take on such a massive task. There is a lot of soul-searching that needs to happen, both culturally and politically in Europe: while privacy is a shared value in most of the continent’s corners, due to the lingering fear of a return to totalitarianism – fueled in no small part by the ascension of the likes of Hungarian prime minister Viktor Orbán to power – there is still a phantom of apprehension in the interactions between the tribes that make up Europe that seems to foreshadow balkanization. On top of this we have a schizophrenic political class that speaks of free trade one minute and restrictions the next, amongst whom are those who get raging hard-ons at the merest mention of censoring pornography or anything else they find offensive or overly stimulating.

That said, this may well turn out to be Europe’s decade in tech, and all because the United States failed to heed an important and timeless warning: “We must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military industrial complex.” Eisenhower’s parting words to a nation being enveloped in a cold war were colder still, as a man who had seen a beast grow out of hand during his years in office was urgently pointing at the writing on the wall. But the years passed and the beast grew – premonitions turning to loathsome misery with each passing President who failed to stop the surveillance state.

And now, the military-industrial complex may have destroyed the US’s Internet-industrial complex.

Just as the last two thirds of humanity are preparing to transition into cyberspace, the NSA’s actions have revealed it to be far more of a Wild West than any government feels comfortable admitting. The rule of law breaks down really fast when there’s no clear monopoly on the legitimate use of violence. There are few acts as violent as stealing everybody’s secrets. Almost two hundred countries are screaming for legitimacy, but the one that stayed the most silent – except when berating, say, Iran, for not respecting “Internet freedom” – was the one whose legitimacy had already been eradicated by their violations of the values upon which their country was founded.

Passing over Eisenhower may have been the death-knell for American democracy, but its exposure may sound the beginning of a new era of human rights. Those coming online for the first time a few years or decades from now may be faced with a world altogether different from the one we now live in, perhaps partly in that they will have a choice between the monitored networks of Oceania or the liberal cryptarchies of Eurasia. The market will undoubtedly have its say in what happens after that.

For now though, there is a plan emerging. The hackers and the human rights activists, the net-freedom-blah people and the technophiles have been awakening from the post-Arab Spring burnout and remembering the things that need to be done to prevent the next Mubarak. Better, simpler, more usable cryptography. Peer-to-peer, verifiable, anonymous monetary systems and democratic decision making systems. Secure communications and full transparency within governance.

During the transition to this new European future, a lot of data is going to have to be stored – refugee data seeking asylum from the terrors of the Anglo-American surveillance state. While the governments of Sweden and the UK may be somewhat too eager to share the data flowing through their resident data centers with their American pals, there are a few countries, notably Iceland, who are willing to provide a strong legal environment, cheap renewable energy, and good connectivity to the rest of the world. Data centers are not the future, but they are the present, and for now there’s an amazing business opportunity out there for countries who are willing to stand up and defend data sovereignty, the notion that individuals have the right to privacy and control over the data they generate.

To those who wish to practice data sovereignty before it becomes cool, I’d say: Come to Iceland. Bring data.

Some thoughts on working with GnuPG

A lot of people have complained about OpenPGP for a number of valid cryptographic reasons. That doesn’t change the fact that it is widely used, and wildly useful. It urgently needs to be replaced with something more sensible, but for now we’re stuck with it. In practice, this also means that we are stuck with GnuPG, the most common and by far the best implementation of OpenPGP.

GnuPG is the one and only reference implementation of RFC 4880, and despite thousands of companies making use of OpenPGP in their infrastructure, there is for all intents and purposes a solitary dude in Germany trying to keep it all together. Werner Koch is an absolute hero for managing to do that, and deserves our respect and support. Financially supporting the GnuPG project is also something people should be doing.

The following is, however, necessary and hopefully constructive criticism of GnuPG.

One of the things I’m largely to blame for in Mailpile is the GnuPG interface. It’s a chunk of Python code that executes the GnuPG binary, tosses information at it, and figures out what to do with the output. There are lots of libraries for doing this, but after a great deal of exploration I found that all of the Python libraries that did this were insufficient for our needs, and the only thing crazier than manually forking out to GnuPG in our situation would be to use the GPGME library.
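To make the shape of this concrete: at its core, such an interface is just process plumbing around the gpg binary. A minimal sketch, not Mailpile’s actual code, might look like the following. The binary parameter is purely an illustration hook, so the plumbing can be exercised even without GnuPG installed; the --status-fd 2 trick merges the status descriptor into stderr.

```python
import subprocess

def run_gpg(args, binary="gpg", input_data=None):
    """Fork out to the GnuPG binary, merging the status descriptor
    into stderr (--status-fd 2) and capturing all output."""
    proc = subprocess.Popen(
        [binary, "--batch", "--status-fd", "2"] + list(args),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate(input_data)
    return proc.returncode, stdout, stderr
```

Everything else in the interface layer is parsing whatever comes back out of those two streams.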

GPGME is almost as confusing and annoying as calling GnuPG directly, but it also requires us to ship architecture-specific libraries to everybody, something we’re actively avoiding. Having to ship GnuPG binaries to Windows and MacOS users is bad enough, but dependency hell is a place we want to stay out of. If we were writing Mailpile in, say, C or C++, then GPGME would definitely be the library of choice, but we’re not, so it isn’t. On top of that, the available Python bindings for GPGME are very flaky (last updated in 2008!), and not developed or maintained by the GnuPG team.

As a result, we’ve got a roughly 1200-line chunk of code in Mailpile that has the fun and useful task of chatting with GnuPG, and the stupefyingly annoying task of working around all of GnuPG’s inconsistencies.

The problems with GnuPG seem to fall roughly into two broad categories: inconsistent output structure and inconsistent interfaces. Both are rife with surprising behaviour and confusing failure modes. Beyond those categories, the larger meta-problem is that no statement about these problems remains stable, as they disappear and reappear at odd intervals as new versions are released. The number of moving parts essentially leads to a lot of confusion about whether a particular bug exists in a particular version or not, and whether it is affected by wind speed. To wit, I have over the course of Mailpile’s development added, removed, and re-added a workaround for one bug, although I think it’s safe to say that it does not exist post GnuPG 2.1. The comment on that workaround in the code illustrates the issue perfectly:


def list_secret_keys(self):
    #
    # Note: The "." parameter that is passed is to work around a bug
    #       in GnuPG < 2.1, where --list-secret-keys does not list
    #       details about key capabilities or expiry for
    #       --list-secret-keys unless a selector is provided. A dot
    #       is reasonably likely to appear in all PGP keys, as it is
    #       a common component of e-mail addresses (and @ does not
    #       work as a selector for some reason...)
    #
    # The downside of this workaround is that keys with no e-mail
    # address or an address like alice@localhost won't be found.
    # Therefore, this parameter should be removed when GnuPG >= 2.1
    # becomes commonplace.
    #
    # (This is a better workaround than doing an additional
    # --list-keys and trying to aggregate it though...)
    #
    # BRE: Put --fingerprint at the front and added selectors
    #      for the world's MOST POPULAR LETTERS! Yaaay!
    #
    retvals = self.run(["--fingerprint",
                        "--list-secret-keys", ".",
                        "--list-secret-keys", "a",
                        "--list-secret-keys", "e",
                        "--list-secret-keys", "i",
                        "--list-secret-keys", "p",
                        "--list-secret-keys", "t",
                        "--list-secret-keys", "k"])
    return self.parse_keylist(retvals[1]["stdout"])

This bug exists in the first category:

Inconsistent output structure

GnuPG generally accepts command line parameters, uses these to perform actions, and returns output. The output generally takes two forms:

Line by line descriptive output, such as when listing keys

Bulk output, such as when encrypting, decrypting, or signing

The line-by-line output has two modes, the normal mode where the data is tabulated with spaces into mostly nice, if somewhat confusing columns, and the --with-colons mode, where the spaces are replaced with colons, for easy parsing. This is quite clever and good. The problem arises when one intends to start parsing this data.

First, a word on discoverability. If you ever intend to do anything with GnuPG, you first need to read and internalize a document aptly titled DETAILS, which contains a lot of the details about what’s going on with GnuPG output. I have dutifully read, memorized chunks of, and bookmarked this file for posterity. It is immensely helpful. Among other things, it documents the format of GnuPG’s key listing output.

In order to decipher what that output means, you need to refer to the rest of the document. The --with-colons format is the one we want to be working with.

Now here comes issue the first: this is essentially a colon separated value (CSV!) data structure, but the data being provided is a) inconsistent, and b) structured.

Notably, the first output line says “there is a public key,” and the line after it says “here is a fingerprint.” Naively one might think that these are unrelated. But in fact, all of the lines from the one starting with pub up to the next one that starts with either pub or sec are actually details about the nature of the public key mentioned in the pub line – although to make things worse, the fpr lines after the sub lines refer to the sub line but not the pub line. Confused yet?

In reality, parsing this isn’t too terrible, but it can only be done in a reasonable way if you understand the structure of PGP keys and the output format of GnuPG. These are not reasonable assumptions for GnuPG to be making. Even armed with knowledge about the structure of keys and the handy DETAILS document, my first version of a parser was overly generic and terribly inefficient, because I kept trying to avoid inconsistencies.

Some of the columns are meaningless for some of the output lines, but more shockingly, some of the columns are MISSING sometimes. Three of the columns just simply evaporate if the line is an fpr-type line. On top of that, there’s no really good reason why the fingerprint needs to be a separate output line rather than just being added in at the right place. According to the DETAILS file, field 10 is for “User ID” – which is to say, the name, e-mail address, and comment associated with the key. Things that the fingerprint emphatically is not.

At this point you’ll notice that field 5 contains the Key ID. And for added pain, the key ID is variously the last 8 or the last 16 nibbles (hexadecimal digits) of the fingerprint.
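To make the grouping rules above concrete, here is a minimal sketch of a --with-colons key listing parser. This is an illustration, not Mailpile’s actual parser, and it assumes 2.x-style output where user IDs arrive as separate uid records: record type in field 1, key ID in field 5, and field 10 holding either the user ID or, on fpr lines, the fingerprint.

```python
def parse_keylist(output):
    """Group a --with-colons key listing into per-key structures."""
    keys = []
    target = None  # the pub/sub record the next fpr line describes
    for line in output.splitlines():
        fields = line.split(":")
        rtype = fields[0]
        if rtype in ("pub", "sec"):
            key = {"keyid": fields[4], "uids": [], "subkeys": []}
            keys.append(key)
            target = key
        elif rtype in ("sub", "ssb"):
            subkey = {"keyid": fields[4]}
            keys[-1]["subkeys"].append(subkey)
            target = subkey
        elif rtype == "fpr" and target is not None:
            # fpr describes the most recent pub/sub line, and puts
            # the fingerprint where user IDs normally live: field 10
            target["fingerprint"] = fields[9]
        elif rtype == "uid" and keys:
            keys[-1]["uids"].append(fields[9])
    return keys
```

Even this toy version has to hard-code the pub/sub/fpr adjacency rules and the vanishing columns, which is precisely the knowledge GnuPG should not be assuming its callers have.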

Frustrated yet? Me too. But let’s just wave the rest of this category away, and move on to the next:

Inconsistent interfaces

So let’s imagine you want to generate a key. Sounds like a reasonable thing to do, right? We’re all hip and cool and want to do so programmatically with our shiny command line interface to GnuPG, so naturally we think it’ll look something like:

… or something to that effect. And have sensible defaults for any parameters that are skipped, or otherwise make them required. Right?

Wrong.

GnuPG does have a --gen-key flag, but when you call it you are dropped into an interactive interface where you are forced to answer questions, one at a time. In varying order, depending on the version, it seems.

The only sensible programmatic way to deal with this is to use “expect” style scripts, where your script captures the output and provides programmatic input depending on what the application last said. These were used a lot in the ’80s, but have fallen out of favour because: a) they make internationalization a nightmare, b) they make changing versions of software a nightmare, and c) they are almost never the right way to do anything.

They do work though. Kind of. Until they break, and it’ll be hell to debug them.

Now, avid users of GnuPG will at this point mention the --batch option, which allows in this case for providing options to the key generator in yet another format. Except, of course, that if you want to do something entirely reasonable like add more than one UID (for instance if you have multiple e-mail addresses) to a new key, you can’t. --batch just doesn’t support it.
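For reference, a --batch key generation run reads its parameters from a file in the format below, per GnuPG’s unattended key generation documentation. This is a representative sketch, with hypothetical names; note that there is no directive for adding a second UID.

```
%echo Generating a key
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Alice Example
Name-Email: alice@example.org
Expire-Date: 0
Passphrase: correct horse battery staple
%commit
%echo Done
```

You then feed this to gpg --batch --gen-key params.txt and hope for the best.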

So your options are to either painfully generate keys using expect-style scripts, or use --batch and then edit the key afterwards to add UIDs. Except that --edit-key also relies on an interface which requires the use of expect-style scripts, so you just gained nothing.

Another thing that frequently happens when using encryption software with slow algorithms (such as secure pseudorandom number generation or RSA) is that you have to wait a long time for things to happen. When you’re making software with nice user interfaces, you sometimes start thinking that showing some kind of intermediate progress would be a nice thing to do. This is where we get to GnuPG’s wonderful status file descriptor.

Really, the status descriptor is awesome. It gives me lots of information that is valuable and can make life a lot better. There are, however, a few shortcomings. First, unlike all the other file descriptors you may work with in GnuPG, the status descriptor is not guaranteed to give you a newline character at the end of a status, which renders a bunch of sensible methods of reading input from it unreliable and requires that I handle that descriptor with special magic. Nor are you strictly guaranteed to only get statuses. I have on occasion run into blank lines and other weirdness that needs to be stripped. Once those quirks are all managed, the status descriptor is actually invaluable and should not be overlooked, especially when mixed with the --enable-progress-filter flag.

The biggest complaint about the status descriptor is that it cannot be relied upon as a flow control mechanism. It does not always give output, or indicate the appropriate sequence of things, so an interface can use it for the purpose of increasing their information about the current situation, but not as a replacement for constant reading and parsing of the STDOUT and STDERR handles, and certainly not as a replacement for in-depth understanding of which order things happen in.
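In practice, a tolerant reader for the status stream ends up looking something like this sketch, which assumes the documented [GNUPG:] KEYWORD args line format and defensively skips the blank lines and noise described above:

```python
STATUS_PREFIX = "[GNUPG:] "

def parse_status(buf):
    """Extract (keyword, args) pairs from a status-fd buffer.

    Tolerates stray blank lines, non-status noise, and a missing
    trailing newline after the final status line.
    """
    events = []
    for line in buf.replace("\r\n", "\n").split("\n"):
        line = line.strip()
        if not line.startswith(STATUS_PREFIX):
            continue  # blank line or other weirdness: skip it
        keyword, _, args = line[len(STATUS_PREFIX):].partition(" ")
        events.append((keyword, args))
    return events
```

This gives you a stream of hints, nothing more; the actual control flow still has to live in the STDOUT/STDERR handling.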

Actually, it should also be mentioned that as nice as it is to have all these descriptors, heavy use of descriptors turns into a world of problems on Windows. Windows is finicky enough as it is. Our solution was passing the status through to STDERR, which really works kind of fine.

Speaking of order, consider the handling of the passphrase descriptor, a special descriptor for accepting a passphrase sent by the user as part of a wrapper-mediated communication (because nobody ever uses pipes like that on the command line), in GnuPG’s gpg.c.

The interesting thing (aside from the annoying and dangerous lack of indentation on one of the if statements there) is the way the passphrase is read from the passphrase descriptor before the commands are processed. Which is to say: the passphrase must be sent, and, due to the way read_passphrase_from_fd is written, that descriptor must be closed on the sending end, before anything else happens. This means that you need to know, at the time you execute the GnuPG binary, whether you are going to send a passphrase programmatically. That leaves three options: a) send it every single time, which requires storing the passphrase on the calling side, typically in insecure memory; b) be willing to execute the same command twice, capturing potential errors on the first try and figuring out that they are due to a lack of passphrase, something the error message will not always be clear about; or c) keep track of the entirety of GnuPG’s internal state, which would be absolutely insane even if it weren’t version dependent.

This behaviour is not obvious, or particularly reasonable, let alone documented. Figuring this out took a long time.

If you’ve seen Mailpile’s Windows and MacOS releases, you’ll have noticed that we are shipping slightly old versions of GnuPG. The reason for this is that we figured out pretty late that the passphrase-fd is not the correct way to do things; it has been disabled in more recent versions of GnuPG in favour of expanded use of the gpg-agent mechanism. So Mailpile should be a gpg-agent.

(It is notable that several distributions still have GnuPG 1.4 as the default instead of GnuPG 2.x…)

The reason for this is that Mailpile provides a web interface, and in some of its use cases, it will do so from a server which is not necessarily capable of rendering a GTK window or providing a terminal prompt on the user’s device. So despite all of the reasons why people might not want to shift a PGP passphrase over an SSL connection, it might still be something people will want to do, and we need to be ready for that contingency. So we need to accept the passphrase through a web form, and pass it back to GnuPG one way or another. (Note: the generic case is Mailpile running on localhost, which is always a fine thing to do. Even over HTTP. Normal threat model limitations apply.)

All of this is weird and annoyingly inconsistent. This category of problems probably doubled our interface in size and complexity, and made error handling an absolute nightmare.

The Error Handling Issue

When writing a library like this, we need to be able to anticipate errors from GnuPG and respond appropriately. The number of different and confusing ways of receiving information also means that there are a number of different and confusing ways to receive error statuses and such. Sometimes the return value is useful, but frequently it is not. Sometimes there is something on the status descriptor, or on STDERR. Often both, sometimes neither. The entire thing is maddening.

The approach we’ve had to take is the opposite of what would be preferable. It is simply to check if the positive output we’re getting from GnuPG is roughly of the sort that we were expecting, and assume that if it isn’t, an error has occurred. As a general error handling strategy this is idiotic, we know, and we’d like it to stop.

What can be done?

The short answer is the same as Matt Green’s answer: It is time for PGP to die — or rather, RFC 4880 needs to be cleaned up, simplified, and replaced. PGP in its current form needs to evolve. There are a lot of very good reasons why, which Carlos has neatly catalogued. But realistically, PGP is what people use for e-mail, and until we have widespread adoption of crypto in e-mail at all, trying to replace PGP is just going to cause painful fragmentation. Since one of Mailpile’s goals is to get millions of people encrypting their e-mail by default, we can’t risk this fragmentation right now. If we round to the closest lakh, zero people currently encrypt their e-mail. This is scary and bad. The way forward is not to throw PGP out, but to start thinking seriously about what replaces RFC 4880.

But we’re stuck with RFC 4880. For now. A standard that is, for better or worse, being maintained entirely by one man.

Which gives us four options:

Stick with GnuPG and improve it substantially.

Fork GnuPG and improve it substantially.

Replace GnuPG with something simpler and more consistent.

Give up.

None of those approaches is good. I’m going to take option four off the table immediately because we’re not going to give up.

Option two is essentially the hostile version of option one, so I’ll write it off immediately. The people who’ve been developing GnuPG are great and we really like them. So we won’t be forking GnuPG anytime soon — heck, even if we did want to do that, we’d still not have any time to actually work on it.

Option three sounds most sensible long-term. Cruft is unavoidable, but Google’s End-to-End might potentially serve as the basis for a “minimum viable PGP”. But End-to-End is also written in JavaScript, and while people are entirely free to call me old-fashioned, I’d like the GnuPG replacement to be written in a compiled systems language.

But long term is long term. Short term, the only option is to stick with GnuPG.

I’d therefore like to propose the following:

GnuPG JSON Mode

As I mentioned, a lot of GnuPG’s output is actually structured far more than its output format can express. In our work so far, we’ve managed to build reasonable JSON structures out of that output for a lot of things. Completing that work and expanding on it, it would be possible to support something like this:


$ gpg --json '{query}'
{response1}
{response2}
...
{responseN}

This would be relatively easy to build atop GnuPG’s current source code, making the --json flag preempt all else in the way --batch currently does. GnuPG would then use a well supported library to parse the query, figure out what is being asked, call the appropriate internal functionality, and return the right data structures, also JSON encoded.

In order to support intermediate results, status-descriptor style, an arbitrary number of responses is allowed. They need not be comma separated, because we want our input parser to be able to pick them up one by one. Rather, each response block simply ends with a newline.

Have GnuPG exit after the last response.
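On the consuming side, a caller could then pick such newline-delimited responses up one at a time with something like this sketch. It is hypothetical, of course, since no --json mode exists; json.JSONDecoder.raw_decode does the incremental work of reading one object and reporting where it ended.

```python
import json

def iter_json_responses(text):
    """Parse a stream of whitespace-separated JSON responses, as the
    proposed --json mode would emit them, one object at a time."""
    decoder = json.JSONDecoder()
    idx, end = 0, len(text)
    responses = []
    while idx < end:
        if text[idx].isspace():
            idx += 1  # skip the newlines between response blocks
            continue
        obj, idx = decoder.raw_decode(text, idx)
        responses.append(obj)
    return responses
```

The same loop works unchanged whether GnuPG emits one response or a long stream of progress updates followed by a final result.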

With this, anybody implementing a GnuPG interface will be able to do all the magic relatively easily. The data structures can be well documented. Everything can become easy. I will stop losing my hair.

Somebody might ask, what about GPGME? Frankly, GPGME is great for a particular subset of GnuPG users. They can keep using it if they want. But if --json exists and is consistent and comprehensive, everybody will use that. Trust me.

Conclusion

GnuPG is important and great in many ways, but it is also deeply broken and downright dangerous. The sooner it becomes a consistent tool, the sooner it will become something other than a fool’s errand to attempt to interface with it. I’m happy to be on the caravan of fools for now, but only if there is something worthwhile at the end of this quest.

Software is hard. Security software is harder. Werner is doing great at managing a very shit situation, created by RFC 4880. I think there is a real possibility to make GnuPG way better. For now, we need JSON mode. I’m sure crowdfunding this work is possible, because we need it. I for one will put some cash down for this bounty. Join me?

I was in Edinburgh some months ago visiting Bella Caledonia. I did
this talk there, trying to give some history and background to the
Icelandic constitutional process of 2010-2013, and putting it into a
context of Scottish independence.

Suffice to say, I think Scotland should be independent. I say at least
twice in this talk: EVERY reason that’s been given for people to vote
“no” is invalid.

The following is a transcript of my keynote lecture at FSCONS 2013. Releasing it now because my last post referenced it, and at SIF 2014 today, Carl Bildt essentially proved pretty much all the points I made here.

It is good to be here, it is always good to be here at FSCONS. More so than any other event I attend, to come here is to come home. Yet to come upon this stage is always a reminder that we have work to do, and this year, more than any previous year, we have work to do. In part that is perhaps because in previous years we were too lighthearted about the work we need to do, or too blasé or too busy doing other things. Of that I am as guilty as any of you. But we need to talk about this seriously now.

The work that needs to be done now exists for reasons that need no introduction. I’m going to try and talk about that work, and about knowing, and about acting. I’m going to try and talk about fascism, though not in the sense we normally use the word. And I’m going to talk about the distinction between technology and politics, and how we allowed ourselves to be convinced by the fascists that such a distinction existed, and how even those of us who are very much aware of the political implications of technology are often blind to the implications of those politics. And of course, I’m going to talk about what all of this has to do with Free Software.

This year has been a good year for knowing. We now know many things that we were not supposed to know, that those who intended us not to know were very serious about keeping from us. We also know that there is much more that we will know soon, and those who do not want us to know these things are struggling to figure out how to keep this knowledge from us. Their goal is ultimately to determine in which way they can cut off free speech without seeming to do so.

In England, where I now reside, there are discussions of how to prosecute those who know things that we should know, how to cause David Miranda to be rendered permanently persona non grata for the sole crime of having passed through an airport’s transit lounge. All is not as it should be. It would be ludicrous to claim that England were a democracy, but as many still make such claims it’s worth noting that these are not the actions of a democracy.

In light of Edward Snowden’s exposures of massive surveillance conducted by the United States Government, a lot of commentators from political, technical, social and mathematical angles have debated heavily the question famously framed by one from the country where Snowden sought refuge as Что делать? What is to be done?

In order to answer the question, the question must be asked. Unfortunately a lot of the public debate around the response to the revelations has avoided defining the actual problem and has fallen short in terms of defining concrete solutions.

Understanding the Problem

The problem created by the existence of ubiquitous surveillance conducted by a state in consortium with private actors falls into a few broad categories. There are issues which arise internally within the state in question, issues which arise externally in the international realm, then there are existential issues, and there are more general issues with the political trend.

I have recently spoken in other venues about the existential problem of ubiquitous surveillance, so I will not go deeply into that topic, except to say that in the time since I gave those speeches and wrote those essays, their harshness has not only been repeatedly vindicated but shown to be a severe understatement.

The existence of these systems is a fundamental threat to society.

The best way I have found to think about this is to think of nuclear weapons. Nuclear weapons have been used to murder around 260,000 people over the course of human history. The people who committed that crime have never been held to account, but, having narrowly averted a mass extinction event, in part through actions taken in Berlin exactly 24 years and one day ago, we now have roughly ten thousand of these devices in existence. We don’t know where all of them are, but we know that they exist in a scarcity economy: they are countable, and they can be dismantled.

Surveillance technology does not have this feature. Software, not being subject to the same structures of scarcity as nuclear weapons, can exist in uncountable copies throughout the Internet. We don’t know where Prism is, nor do we know how many computers Boundless Informant runs on. And we might never know. This means that for all intents and purposes, we must assume that the cold war of surveillance is one that can never actually end – not through the felling of any Iron Curtains.

The Digital Curtain is impervious to all the world’s Berliners.

The people who built these tools have not killed anybody directly through them, although indirectly these tools have doubtless facilitated state murder. Still, the fundamental rights of at least 2.5 billion people have been violated through the creation of these tools, and, barring a narrow margin of possibility that we have not yet explored, their creators will never be held to account.

The Internal Problem

Internally within countries such as the United States and the United Kingdom, the problem of ubiquitous surveillance is one where the distinction between the inside and the outside is lost. In an episode of Battlestar Galactica from 2004, the protagonist Commander William Adama states that “There’s a reason you separate military and the police. One fights the enemies of the state, the other serves and protects the people. When the military becomes both, then the enemies of the state tend to become the people.” Here he echoes a sentiment more concisely expressed by Burroughs when he quipped that “a functioning police state needs no police.”

More recently, and less fictitiously, Eben Moglen stated in Westward the Course of Empire that:
“Military control ensured absolute command deference with respect to the fundamental principle which made it all ‘all right,’ which was: ‘No Listening Here.’ The boundary between home and away was the boundary between absolutely permissible and absolutely impermissible—between the world in which those whose job it is to kill people and break things instead stole signals and broke codes, and the constitutional system of ordered liberty.”

The internal problem of ubiquitous surveillance is that it amounts to a refutation of the individual’s ability to defend actions against government scrutiny. It does not, oddly, eliminate the presumption of innocence – formalized as ei incumbit probatio qui dicit, non qui negat: the burden of proof lies with the accuser, not the accused – but rather allows the accuser to see all the cards, always. While some will argue that a just government should be able to see all the cards at all times in the name of crime prevention, such argumentation does not address the flawed logic of presuming that the government is just.

Once one makes such an assumption, as various commentators including former editor of The Independent, Chris Blackhurst, have done quite publicly of late, then any criticism of existing authority is automatically considered invalid, and any actions taken by existing authority are considered valid. Blackhurst argued that “If the security services insist something is contrary to the public interest, and might harm their operations, who am I to disbelieve them?”

In The Authoritarians, Robert Altemeyer sets out three criteria for considering a person to have the psychological profile of a Right-Wing Authoritarian follower:

a high degree of submission to the established, legitimate authorities in their society;

high levels of aggression in the name of their authorities; and

a high level of conventionalism.

He further argues that “most people seem spring-loaded to become more right-wing authoritarian during crises.” All of these behavioral characteristics are demonstrated in spades by those journalists and pundits who have been most rabid in justifying government secrecy and denouncing those who would expose it, even as a crisis of confidence unravels public trust in the presiding authorities.

In short, the internal problem of ubiquitous surveillance comes down to a question of legitimacy. In previous times, any government operating a highly efficient analogue of the Stasi would be deemed illegitimate and undemocratic, a government that imprisoned those who exposed wrongdoing would be considered to be rogue, and a government bent on preventing public discourse by sending thugs over to media outlets’ offices to drill holes in hard drives and set fire to computers would be considered despotic at the very least. A government has no legitimacy when it spies on its citizens and lies about it perjuriously, covers up systematic war crimes and throws those who exposed them in prison for 35 years, and holds people without trial for investigating leaked evidence of criminal wrongdoing. The crisis of modern western democracy is a crisis of legitimacy.

The External Problem

Externally, there is a diplomatic problem. The crisis created by Edward Snowden’s revelations is pushing diplomatic boundaries in ways that even Chelsea Manning’s revelations didn’t, with Obama refusing to visit Putin, Rousseff refusing to visit Obama, and Morales being forced to visit Fischer by Portuguese, French, Spanish and Italian airspace authorities. If you had been cryogenically frozen during the Cold War, then thawed out in 2013 and had this situation explained to you, you wouldn’t believe any of it.

In particular, you’d have trouble grokking the fact that a post-dictatorial South America appears to be the most vigilant in upholding the spirit of the Universal Declaration of Human Rights, while Western European and American authorities vigorously defend exactly the kind of activities they once used to define the USSR as the enemy.

Since nation states came into existence, there has been a general understanding that every government spies on every other government to the extent they can, without being overly aggressive, overt or unsubtle. This diplomatic allowance has nevertheless not been assumed to extend to the general public or to industry, although at various times various governments have overstepped those bounds and been given a stern talking to. However, since the time when Henry L. Stimson proclaimed that “Gentlemen do not read each other’s mail,” in his closing of the Black Chamber – an artifact of US military imperialism that Stimson, in 1929, considered to be outdated and inappropriate – there has been a growing anxiety relating to government interception of cross-border telecommunications, to no small degree fueled by the globalization of trade and the concentration of the world’s communications onto a few hundred undersea fiber optic channels.

The external problem, then, becomes one of trust. The gentleman’s agreement to conduct only the minimum amount of spying necessary to protect national interests, and only on public officials of the governments in question – very subtly semi-formalized in the Vienna Convention – exists to make sure that allies can trust each other, enemies can still conduct trade, and everybody can more or less get along. Indeed, during World War I, the UK and Germany, while at war with each other, were the world’s single most active pair of trade partners. When that trust is broken, it threatens international diplomacy, upsets international trade, and makes the founding of any new diplomatic alliances far more complicated than it already was.

The fallout of this is becoming clear: Brazil is going to run its own fiber optics to Europe and finance the creation of alternative e-mail systems to contend with American commercial offerings, while various other countries are considering measures as far apart as trade sanctions against the US, self-balkanization from the Internet à la China, or overhauls of internal government communication standards. Very few governments are entirely blasé about this, and none should be.

The larger trend problem

Underlying all of this is a worrying trend. Over the last decade, the pendulum of cultural liberalism has swung back in many ways, with wars on terrorism, drugs and the rest becoming ever more central to discussions globally. Inequality has grown, and authoritarianism is on the rise.

This authoritarianism is not the crude, forceful authoritarianism of previous centuries, where brutal measures were taken against all who opposed the regime, but a softer, more subtle form of authoritarianism, derived from the right-wing branch of nationalism known as fascism. To prevent people from rising up against them, such regimes must subdue the people and convince them that the life they lead is not too bad, and that it could be worse. When I was a child, my grandmother used to say “think of the children in Africa.” Without meaning to say that my grandmother was a fascist, I recognize that this form of discourse is a subtle part of the cultural fascism that we have become accustomed to.

Fascism has become the dominant political system of the world – under the traditional definition of fascism rather than the more modern catch-all if-the-shoe-fits definition – but various aspects of how it came to prominence, through agreements, diplomacy and the skirting of poorly enforced or unenforced rules, both explicit and implicit, have led to it going unnoticed by most people. This has brought us to a point where not only the likes of the NSA are an inevitability, but so are the likes of Monsanto, Northrop Grumman, JP Morgan, Microsoft, and so on.

Fascism: The perfect union of state and business.

Let’s not lose track of what we’re talking about. Fascism in this form is also known as a “mixed economy”. You might have noticed how Nordic social democracy is all about the promotion of mixed economies, but in practice, this means that the governments support certain large companies directly or indirectly with monopoly rights, procurements, grants and so on, while leaving what Venkatesh Rao called the “Jeffersonian middle class” in the gutter.

Sweden is proof that Fascism can be pleasant.

Last month, US Senator Dianne Feinstein suggested that “if you want to find a needle in a haystack, you first must have a haystack,” as a justification for the creation of massive databases detailing nigh every aspect of every individual’s life. In response, ex-FBI agent Coleen Rowley wrote that “Of course self-righteous builders of massive haystacks are not inclined to point out that it’s inherently easier to find a needle if it isn’t covered with hay,” pointing out the logical fallacy behind the argument but not deepening our understanding of the internal logic of a governance structure where such statements are considered reasonable. A “Feinstein’s Haystack” can be defined as a problem that has been created for the purpose of creating the impression that it is being solved. In order to retain authority, legitimacy is required. The most efficient way to gain legitimacy is to impress on one’s followers that the role of the authority is justified and the holder of the authority is necessarily the best suited for the job. Through the creation of this institutionalized make-work, authoritarian leaders retain legitimacy – even when the justifications are illogical.

One sees similar logic deployed globally to justify direct – if subtle – atrocities committed against humanity. Not so much a victimless crime as a crime that the victims won’t notice until it’s too late.

A Cost Estimation

Let’s run some numbers on this.

About 2.5 billion people are affected by the NSA’s surveillance activities. This is an estimate of the number of people using the Internet in the world, a number that can be expected to grow quite substantially over the next several years. To break this number down a bit, current estimates put the number of e-mail users globally at 1.9 billion individuals as a conservative estimate, with 2.3 billion being a more likely reality. Facebook has 1.15 billion users, Skype has around 600 million users, and Twitter is of similar size. Dropbox has 175 million users.

Over a billion Android smartphones and tablets are in circulation, along with over 250 million Apple iPhones and iPads. Amongst e-mail users, roughly 435 million people use GMail, 325 million use Outlook.com (formerly Hotmail), and 298 million use Yahoo! Mail. The top ten e-mail providers in aggregate host between 70% and 90% of all (legitimate) e-mail accounts, with the top fifty providers accounting for close to an estimated 99% of the e-mail market.

Further: During a single day last year, the NSA’s Special Source Operations branch collected 444,743 e-mail address books from Yahoo, 105,068 from Hotmail, 82,857 from Facebook, 33,697 from Gmail and 22,881 from unspecified other providers. This gives some idea of the relative internal security capacities of these core vendors. It has long been known that Yahoo’s operational security is quite bad as far as user privacy is concerned.

The DNI (Director of National Intelligence) budget is about 52 billion dollars per year. That covers the NSA, the CIA and some other things, but it does not include US Cyber Command, ONI (Office of Naval Intelligence), any US Air Force surveillance activities, or research done at the National Defense University and similar organizations, nor does it include surveillance conducted by the other Five Eyes partners. Adding those other aspects, it’s not a stretch to guess that the total budget is $120 billion per year.

$120 billion over 2.5 billion people over 365 days a year gives us a cost estimate for this catch-all surveillance of about $0.13 per person per day. Let’s call that PPV: Price Per day of Violation. This is incredibly cost effective for the surveillance states. Of course, a lot of the $120bn goes to various tasks which are not directly related to spying on the general public – everything from keeping the floors clean at Fort Meade down to conducting drone strikes on people in Pakistan.
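As a back-of-the-envelope sketch, the arithmetic behind that figure (using the $120bn budget and 2.5 billion affected people estimated above; both are the text’s assumptions, not measured values):

```python
# Back-of-the-envelope PPV calculation, using the estimates from the text:
# a ~$120bn/year total surveillance budget spread across ~2.5bn Internet users.
budget_per_year = 120e9   # USD, estimated total surveillance budget
people_affected = 2.5e9   # estimated Internet users under surveillance
days_per_year = 365

ppv = budget_per_year / people_affected / days_per_year
print(f"PPV: ${ppv:.2f} per person per day")  # about $0.13
```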

But since we don’t know the exact division, and all of these things factor into the same system of systematic human rights violations, let’s just use the total figure. This is also better for the following analysis, because it overstates their capacity: given our premise that pervasive, ubiquitous surveillance is bad, we would rather overestimate the total surveillance capacity than underestimate it. Of course, if it were possible, we would prefer to be accurate, but the asymmetric, clandestine nature of the surveillance measures makes accuracy hard.

Raising the Stakes

A lot of people have been asking “how do we reclaim our privacy”? The answer to that is an economic one. The total global surveillance budget is finite and subject to a lot of real world restrictions. It cannot grow indefinitely. However, we can raise the cost of each privacy violation substantially.

This requires a three pronged attack: technological development, policy advocacy, and litigation. The technology side is likely to be the biggest individual contributor, but we should not discount the benefits of influencing policy makers and dragging offenders through the legal system.

The goal of those interested in protecting human rights should be to raise the average cost of surveillance to $10,000 per person per day within the next five years. This reduces the effective surveillance capacity to about 32,000 people, assuming no budget changes, which forces strictly targeted surveillance and carefully planned target acquisition. In reality, the number will be lower still, simply due to the expected increase in Internet users over the next five years and the associated scaling costs of low-level traffic analysis.
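Running the same arithmetic in reverse shows where the capacity figure comes from (again assuming the text’s estimated $120bn annual budget, held constant):

```python
# Effective surveillance capacity if each violation costs $10,000 per day,
# holding the text's estimated $120bn/year budget constant.
budget_per_year = 120e9   # USD, estimated total surveillance budget
target_ppv = 10_000       # USD per person per day, the target cost
days_per_year = 365

capacity = budget_per_year / (target_ppv * days_per_year)
print(f"People surveillable at target PPV: {capacity:,.0f}")  # roughly 32,900
```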

How to get to $10k PPV?

First, let’s talk about litigation options. The fine people at Privacy International (support their work!) are currently working on taking the seven largest telecoms providers in the world to court over fiber optics surveillance, based on violations of article 8 of the European Convention on Human Rights. The Electronic Frontier Foundation (support them too!) is involved in multi-district litigation against the NSA and various other parties. These two organizations are doing remarkable and amazing work, but they do have limitations on how much they can accomplish, and there is a lot of stuff that they can’t reasonably cover. If they get more money, they can do more things. This is kind of obvious, but seriously consider contributing.

Amongst the many untapped legal options is directly suing various providers, such as Verizon, AT&T, T-Mobile, Apple, Yahoo!, Google, Microsoft, Amazon, SWIFT, Barclays, ABN AMRO, Deutsche Bank, UBS. Why so many banks? Because it isn’t just the Internet that is being monitored.

On top of this, it might be worth considering lawsuits against governments directly. This will be harder to do, but if won, these would have a substantial effect on the situation.

The reason this will be effective in raising the bar is that it will make the various private entities involved feel a direct bottom line impact on their businesses resulting from their collusion with state actors, which will lead them to push back to a much more significant degree than they have so far.

Litigation however will only get us so far. A large amount of policy work is needed in order to fix the current situation. Specifically, numerous international agreements need to be reconsidered and renegotiated. Cross-border data protection agreements should be looked at, and similarly the Wassenaar agreement needs anything touching on cryptography taken out of it. Laws within countries can be improved, in particular data protection laws and laws regarding cryptography. Countries that require key escrowing for instance need to stop doing that.

The Tax Issue

If you happen to be living in one of the Five Eyes countries, the numbers game gets a bit more complicated by sheer virtue of taxes. You see, unless you are dodging taxes, you’re actually funding the adversary. That means that if you start a company around the issue of protecting privacy, base it in a Five Eyes country, and you don’t pull a double-Irish or some other trickery to get out of paying taxes, you’re going to be funding both sides of the battle. In a sense, this fact makes tax-avoiding companies like Google and Facebook somewhat better, in that at least they aren’t funding the surveillance state.

Technical Solutions to Political Problems

Then there’s technology. Although policy and litigation approaches are useful, they will not do anywhere near as much to raise the PPV as improvements to technology. Here, we technologists must first admit a few things to ourselves, and then devise a strategy that is likely to succeed.

In the late eighties and early nineties, we could be forgiven for caring about technology. We were busy building an operating system, we were exploring the reality that is afforded to us when we can control every part of our computers, from bootloaders, keyboards and disk I/O up through graphics adapters, graphical user interfaces, networks and even Perl. We were a nascent breed who could do anything, and the technology was exciting.

Now, we’re a bit further down that particular road, and we have to stop taking the political consequences of Free Software for granted – as many of us unfortunately do. Even those of us who are the most politically aware sometimes subtly mistake arbitrary decisions about the protocol we use, the cryptosystem we employ, or whether we zero-index our arrays, for purely technical decisions. And while I’ve not yet fully comprehended the political implications of using a red-black tree rather than a binary tree, it is a well documented fact that choosing ASN.1 over C-strings can have far-reaching political implications.
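The ASN.1-versus-C-strings point deserves a concrete illustration. The sketch below is illustrative code, not any particular library’s API: a C-style parser stops reading at the first NUL byte, while an ASN.1-style parser carries an explicit length, so the same bytes – here a hypothetical certificate subject name – can mean two different things depending on which convention a checker assumes.

```python
# Illustrative sketch: how NUL-terminated (C-string) and length-prefixed
# (ASN.1-style) interpretations of the same bytes can disagree.
name = b"paypal.com\x00evil.example"  # hypothetical certificate subject

def c_string_view(data: bytes) -> bytes:
    """Interpret bytes the way C's strlen/strcmp would: stop at the first NUL."""
    nul = data.find(b"\x00")
    return data if nul == -1 else data[:nul]

def length_prefixed_view(data: bytes) -> bytes:
    """ASN.1-style: the whole buffer, with its length carried out of band."""
    return data

print(c_string_view(name))         # b'paypal.com' -- looks legitimate
print(length_prefixed_view(name))  # the full, truthful value
```

A validator comparing hostnames with C-string semantics would accept this name as paypal.com, while a length-aware parser sees the deception. Whichever convention a developer picks quietly decides who can be impersonated – a political consequence of an apparently technical choice.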

On top of that – sorry guys – but we suck at design. We suck so much at design that many of us still think a command line is a great user interface, and many of you will defend that stance strongly. Don’t get me wrong, I love the command line, but the command line is a language for people who care about technology. Good user experiences should not require a user to care about technology. In one sense, that comes down to the crux of the problem: Many of us in the free software movement care more about technology than we care about people. Software over wetware. That’s a political stance too.

That brings us to what is to be done – Что делать.

After I had prepared this talk, I found time to watch the intervention Bruce Schneier made at the IETF conference in Vancouver last week, and found that almost everything I had to say had been rendered redundant. Nevertheless, let me give you the outline – and please then go and listen to Bruce.

Moving everything we control from centralized to decentralized infrastructures is the first step. It is one many of us have cared about for years, but the numbers I mentioned earlier show that we have been failing at it.

Technology is always political, and even small design decisions made by software developers can have a drastic effect on political outcomes over long or short periods of time. I’d like to suggest that software developers generally need to start developing like they give a damn about the society they live in – which may be true of the free software movement to a certain but absolutely insufficient degree, and is entirely untrue of those software developers who have not thrown in their lot with the free software movement.

Specifically, I want to rabidly attack the notion that usability and functionality are at odds with each other, and the idea that presenting users with a half baked system where they need to break out the command line whenever things don’t operate within some arbitrary parameters of normalcy is in some way acceptable. Most people don’t care about technology, they care about doing the things that are meaningful to them. They don’t want to spend all day fiddling with GnuPG’s parameters or figuring out whether their XMPP session is being transferred over SSL. They don’t want to know about IPSec or AES.

No. They want to be farmers, or merchants, or dentists or doctors. They want to teach our children languages and mathematics. They want to build houses or spaceships or plumbing or bridges or roads. They don’t have time to work with bad technology that we made badly because we didn’t care about them.

What’s worse: when companies that don’t care about those people either give them highly usable software that doesn’t respect their fundamental rights, most people will go for it because despite its failings, it at least gets the job done. If what we offer them as an alternative is not at least as good in terms of getting the job done – from the perspective of a nontechnical user, it does not matter at all how ideologically pure our offering is.

Software that helps 100 people do something wonderful is absolutely meaningless if it’s unusable by the next five billion people.

Bottom line: If you’re developing software and you aren’t developing that software for the benefit of all humanity, you are helping the fascists.

What needs to happen now is pretty simple: We need to migrate the next billion people off centralized infrastructures and give them strong crypto, and we need to do that over the course of the next five years, at maximum. We must not fail this task. Over a longer timeframe, we must expand this to everybody.

Decentralizing everything, encrypting everything, and hardening all of the endpoints, will not get us out of the fascism we have found ourselves in. Engineering our way out of fascism is a necessary step, but not a sufficient step. We need to fundamentally restructure our societal governance models, but we’ll get to that. That’s later. This is now. We are technologists. Let’s make what tech we can.