As I mentioned earlier, the information in the documents released by Edward Snowden shows a clear pattern of corporate and government abuse of the information networks that are now deeply intertwined with the lives of many people all over the world.

Surveillance is a power dynamic where the party doing the spying has power over the party being surveilled. The surveillance state that results when one party has "Global Cryptologic Dominance" is a seriously bad outcome. The old saw goes "power corrupts, and absolute power corrupts absolutely". In this case, the stated goal of my government appears to be absolute power in this domain, with no constraint on the inevitable corruption. If you are a supporter of any sort of a just social contract (e.g. International Principles on the Application of Human Rights to Communications Surveillance), the situation should be deeply disturbing.

One of the major sub-threads in this discussion is how the NSA and their allies have actively tampered with and weakened the cryptographic infrastructure that everyone relies on for authenticated and confidential communications on the 'net. This kind of malicious work puts everyone's communication at risk, not only those people who the NSA counts among their "targets" (and the NSA's "target" selection methods are themselves fraught with serious problems).

Without knowing the specifics of how the various surveillance mechanisms operate, the public in general can't make informed assessments about what they should consider to be personally safe. And lack of detailed technical knowledge also makes it much harder to mount an effective political or legal opposition to the global surveillance state (e.g. consider the terrible Clapper v. Amnesty International decision, where plaintiffs were denied standing to sue the Director of National Intelligence because they could not demonstrate that they were being surveilled).

It's also worth noting that the advocates for global surveillance do not themselves want to be surveilled, and that (for example) the NSA has tried to obscure as much of its operations as possible, by over-classifying documents and making spurious claims of "national security". This is where the surveillance power dynamic is most baldly in play, and many parts of the US government's intelligence and military apparatus have a long history of acting in bad faith to obscure their activities.

The people who have been operating these surveillance systems should be ashamed of their work, and those who have been overseeing the operation of these systems should be ashamed of themselves. We need to better understand the scope of the damage done to our global infrastructure so we can repair it if we have any hope of avoiding a complete surveillance state in the future. Getting the technical details of these compromises in the hands of the public is one step on the path toward a healthier society.

Postscript

Lest I be accused of optimism, let me make clear that fixing the technical harms is necessary, but not sufficient; even if our technical infrastructure had not been deliberately damaged, or if we manage to repair it and stop people from damaging it again, far too many people still regularly accept ubiquitous private (corporate) surveillance. Private surveillance organizations (like Facebook and Google) are too often in a position where their business interests are at odds with their users' interests, and powerful adversaries can use a surveillance organization as a lever against weaker parties.

But helping people to improve their own data sovereignty and to avoid subjecting their friends and allies to private surveillance is a discussion for a separate post, i think.

Yesterday, i said a sad goodbye to an old friend at ABC No Rio. Cookiepuss was a steadfast companion during my volunteer shifts at the No Rio computer center, a cranky yet gregarious presence. I met her soon after moving to New York, and hung out with her nearly every week for years.

She had the run of the building at ABC No Rio, and was friends with all sorts of people. She was known and loved by punks and fine artists, by experimental musicians and bike mechanics, computer geeks and librarians, travelers and homebodies, photographers, screenprinters, anarchists, community organizers, zinesters, activists, performers, and weirdos of all stripes.

For years, she received postcards from all over the world, including several from people who had never even met her in person. In her younger days, she was a ferocious mouser, and even as she shrank with age and lost some of her teeth she remained excited about food.

She was an inveterate complainer; a pants-shredder; a cat remarkably comfortable with dirt; a welcoming presence to newcomers and a friendly old curmudgeon who never seemed to really hold a grudge even when i had to do horrible things like help her trim her nails.

After a long life, she died having said her goodbyes, and surrounded by people who loved her. I couldn't have asked for better, but I miss her fiercely.

The good birds at Riseup have for years been tireless advocates for information autonomy for people and groups working for liberatory social change. They have provided (and continue to provide) impressive, well-administered infrastructure using free software to help further these goals, and they have a strong political commitment to making a better world for all of us and to resisting strong-arm attempts to turn over sensitive data. And they provide all this expertise and infrastructure and support on a crazy shoestring of a budget.

(note: i have worked with some of the riseup birds in the past, and hope to continue to do so in the future. I consider it critically important to have them as active allies in our collective work toward a better world, which is why i'm doing the unusual thing of asking for donations for them on my blog.)

Occasionally, someone asks me whether we should encourage use of the --ask-cert-level option when certifying OpenPGP keys with gpg. I see no good reason to use this option, and i think we should discourage people from trying to use it. I don't think there is a satisfactory answer to the question "how will specifying the level of identity certification concretely benefit anyone involved?", and i don't see why we should want one.

gpg gets it absolutely right by not asking users this question by default. People should not be enabling this option.

Some background: gpg's --ask-cert-level option allows the user who is making an OpenPGP identity certification to indicate just how sure they are of the identity they are certifying. The user's choice is then mapped onto one of four levels of OpenPGP certification of a User ID and Public-Key packet, which i'll refer to by their signature type identifiers in the OpenPGP spec:

0x10: Generic certification

The issuer of this certification does not make any particular assertion as to how well the certifier has checked that the owner of the key is in fact the person described by the User ID.

0x11: Persona certification

The issuer of this certification has not done any verification of the claim that the owner of this key is the User ID specified.

0x12: Casual certification

The issuer of this certification has done some casual verification of the claim of identity.

0x13: Positive certification

The issuer of this certification has done substantial verification of the claim of identity.

Most OpenPGP implementations make their "key signatures" as 0x10 certifications. Some implementations can issue 0x11-0x13 certifications, but few differentiate between the types.

By default (if --ask-cert-level is not supplied), gpg issues certificates ("signs keys") using 0x10 (generic) certifications, with the exception of self-sigs, which are made as type 0x13 (positive).

When interpreting certifications, gpg does distinguish between different certifications in one particular way: 0x11 (persona) certifications are ignored; other certifications are not. (Users can change this cutoff with the --min-cert-level option, but it's not clear why they would want to do so.)
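To make that rule concrete, here is a small Python model of the interpretation behaviour described above. This is a sketch, not gpg's actual code; the function and constant names are mine:

```python
# A simplified model of gpg's certification-interpretation rule.
# Not gpg's actual code; the names here are mine.

SIGTYPE_GENERIC = 0x10   # generic certification
SIGTYPE_PERSONA = 0x11   # persona certification (no verification claimed)
SIGTYPE_CASUAL = 0x12    # casual certification
SIGTYPE_POSITIVE = 0x13  # positive certification

def certification_counts(sig_type, min_cert_level=2):
    """Does gpg count this certification when computing key validity?

    The low nibble of the signature type is the certification level.
    Generic (level 0) certifications are always accepted; the others
    are compared against the --min-cert-level cutoff (default 2),
    which is why persona (level 1) certifications are ignored by
    default.
    """
    level = sig_type & 0x0F
    if level == 0:
        return True
    return level >= min_cert_level

assert certification_counts(SIGTYPE_GENERIC)      # counted
assert not certification_counts(SIGTYPE_PERSONA)  # ignored by default
assert certification_counts(SIGTYPE_POSITIVE)     # counted
```

Note how 0x10 and 0x13 end up treated identically: nothing in the validity calculation rewards the extra effort of declaring a "positive" certification.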

So there is no functional gain in declaring the difference between a "normal" certification and a "positive" one, even if there were a well-defined standard by which to assess the difference between the "generic" and "casual" or "positive" levels; and if you're going to make a "persona" certification, you might as well not make one at all.

And it gets worse: the problem is not just that such an indication is functionally useless; encouraging people to make these kinds of assertions actively leaks a more-detailed social graph than the default blanket policy of 0x13 for self-sigs and 0x10 for everyone else would.

A richer public social graph means more data that can feed the ravenous and growing appetite of the advertising-and-surveillance regimes. i find these regimes troubling. I admit that people often leak much more information than this indication of "how well do you know X" via tools like Facebook, but that's no excuse to encourage them to leak still more or to acclimatize people to the idea that the details of their personal relationships should by default be public knowledge.

Lastly, the more we keep the OpenPGP network of identity certifications (a.k.a. the "web of trust") simple, the easier it is to make sensible and comprehensible and predictable inferences from the network about whether a key really does belong to a given user. Minimizing the complexity and difficulty of deciding to make a certification helps people streamline their signing processes and reduces the amount of cognitive overhead people spend just building the network in the first place.

However, some tools (gpg and enigmail, among others) ask the user to provide a "Comment:" field when they are choosing a new User ID (e.g. when making a new key). These UI prompts are evil. The savvy user knows to avoid entering anything in this field, so that they end up with a User ID like the one above. The user who provides something here (perhaps even something inconsequential like "I like strawberries", due to not being sure what should go in this little box) will instead end up with a User ID like:

Jane Q. Public (I like strawberries) <jane@example.org>

This is bad. This means that Jane is asking the people who certify her key+userid to certify whether she actually likes strawberries (how could they know? what if she changes her mind? should they revoke their certifications?) and anywhere that she is referred to by name will include this mention of strawberries. This is not Jane's identity, and it doesn't belong in an OpenPGP User ID packet.

Furthermore, since User IDs are atomic, if Jane wants to change the comment field (but leave her name and e-mail address the same), she will instead need to create a new User ID, publish it, get everyone who has certified her old key+userid to certify the key+newuserid, and then revoke the old one.
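To see why the comment is so awkward to change, it helps to look at how the conventional "Name (Comment) <email>" string is put together. The following is a hedged sketch; the regex and function names are mine, and real OpenPGP implementations treat the whole User ID as a single opaque UTF-8 string, which is exactly why changing just the comment means publishing a whole new User ID:

```python
import re

# The common User ID convention is "Name (Comment) <email>".  This
# sketch pulls the pieces apart for illustration only: OpenPGP itself
# treats the whole string as one atomic blob, so there is no way to
# edit the comment in place.
UID_RE = re.compile(r'^(?P<name>[^(<]*?)'
                    r'(?:\s*\((?P<comment>[^)]*)\))?'
                    r'(?:\s*<(?P<email>[^>]+)>)?$')

def parse_uid(uid):
    """Split a conventional User ID into name, comment, and e-mail."""
    m = UID_RE.match(uid)
    if m is None:
        return None
    return {k: (v.strip() if v else None) for k, v in m.groupdict().items()}

# The strawberry-laden User ID from above:
print(parse_uid('Jane Q. Public (I like strawberries) <jane@example.org>'))
```

A certifier is being asked to vouch for that entire string, strawberries and all.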

It is difficult already to help people understand and participate in the certification network that forms the backbone of OpenPGP's so-called "web of trust". These bogus comment fields make an already-difficult task harder. And all because of strawberries!

Tools like enigmail and gpg should not expose the "Comment:" field to users who are generating keys or choosing new User IDs. If they feel it absolutely must be present for some weird corner case that 0.1% of their users will have, they could require that the user enter some sort of "expert mode" before being prompted to do something that is likely to be a mistake.

There is almost no legitimate reason for anyone to use this field. Let's go through some of the ways people use it, drawn from examples i have lying around (identifying marks have been changed to protect the innocent who were duped by this bad UI choice, but you can probably find them on the public keyserver network if you want to hunt around):

domain repetition

John Q. Public (Debian) <johnqpublic@debian.org>

We know you're with debian already from the @debian.org address. Even if the comment is meant to distinguish this from your other address (johnqpublic@example.org), so that people know where to send you debian-related e-mail, the address itself already makes that distinction.

Lest you think i'm just calling out debian developers, people with @ubuntu.com addresses and (Ubuntu) comments (as well as @example.edu addresses and (Example University) comments and @example.com addresses and (Example Corp) comments) are out there too.

nicknames already evident

John Q. Public (Johnny) <johnqpublic@example.net>
John Q. Public (wackydude) <wackydude@example.net>

Again, the information these comments are providing offers no clear disambiguation from the info already contained in the name and e-mail address, and just muddies the water about what the people who certify this identity should actually be trying to verify before they make their certification.

"Work"

John Q. Public (Work) <johnqpublic@example.com>

If John's correspondents know that he works for Example Corp, then "Work" isn't helpful to them: they already know it from the address they're writing to. If they don't know that, then they probably aren't writing to him at work, so they don't need this comment either. The same problem appears (for example) with literal comments of (School) next to an @example.edu address.

This is my nth try at this crazy system!

John Q. Public (This is my second key) <johnqpublic@example.com>
John Q. Public (This is my primary key) <johnqpublic@example.com>
John Q. Public (No wait really use this one) <johnqpublic@example.com>

OpenPGP is confusing, and it can be tricky to get it right. We all know :) This is still not part of John's identity. If you want to designate a key as your preferred key, keep it up-to-date, get people to certify it, and revoke or expire your old keys. People who care can look at the timestamps on your keys and tell which ones are the most recent ones. You do have a revocation certificate for your key handy just in case you lose it, right?

Don't use this key

John Q. Public (Old key, do not use) <johnqpublic@example.com>
John Q. Public (Please only use this through September 2004) <johnqpublic@example.com>

This kind of comment refers to the status of the key itself, not to the person. Since the User ID is associated with the key material already, people who care about this information can get it from the key directly (and expiration dates and revocations express it better). It is also not part of the user's actual identity.

"no comment"

John Q. Public (no comment) <johnqpublic@example.com>

This is actually not uncommon (some keyservers reply "too many matches!"). It shows that the user is witty and can think on their feet (at least once), but it is still not part of the user's identity.

But wait (i hear you say)! I have a special case that actually is a legitimate use of the comment field that cannot be expressed in OpenPGP in any other way!

I'm sure that such cases exist. I've even seen one or two of them. The fact that one or two cases exist does not excuse the fact that the overwhelming majority of these comments in OpenPGP User IDs are a mistake, caused only by bad UI design that prompts people to put something (anything!) in the empty box (or on the command prompt, depending on your preference).

And this mistake is one of the thousand papercuts that inhibits the robust growth of the OpenPGP certification network that some people call the "web of trust". Let's avoid them so we can focus on the other 999 papercuts.

Please don't use comments in your OpenPGP User ID. And if you make a user interface for OpenPGP that prompts the user to decide on a new User ID, please don't include a prompt for "Comment" unless the user has already certified that they are really and truly a special special snowflake.

Writing up the documentation makes me realize that i don't know of any software tools designed specifically for facilitating fabric/craft construction. Some interesting software ideas:

Make 3-D models showing the partly assembled pieces, derived from the flat pattern. Maybe something like blender would be good for this?

Take a 3D-modeled form and produce some candidate patterns for cutting and sewing? This seems like it is an interesting theoretical problem: given a set of (marked?) 3D surfaces and a set of approximation constraints, have the tool come up with a reasonable set of 2D patterns that could be cut and assembled using a set of standard operations into something close to the 3D shape.

a "pattern lint checker" (maybe an inkscape extension?) that would let you mark certain segments of an SVG as related to other segments (i.e. the two sides of a seam), and could give you warnings when one side was longer than the other (within some level of tolerance)

i have a colleague who is forced by work situations to use Windows. Somehow, I'm the idiot^W^W^W^W^Wfriendly guy who gets tapped to fix it when things break.

Well, this time, the power supply broke. As in, dead, no lights, no fan, no nothing. No problem, though, the disk is still good, and i've got a spare machine lying around; and the spare is actually superior hardware to the old machine so it'll be an upgrade in addition to a fix. Nice! So i transplant the disk and fire up the new chassis.

But WinXP fails to boot with a lovely "0x0000007b" BSOD. The internet tells me that this might mean it can't find its own disk. OK, pop into the new chassis' BIOS, tell it to run the SATA ports in "legacy IDE" mode, and try again.

Now we get a "0x0000007e" BSOD. Some digging on the 'net makes me think it's now complaining about the graphics driver. Hmm. Well, i figure i can probably work around that by installing new drivers from Safe Mode. So i reboot into Safe Mode.

Success! It boots to the login screen in Safe Mode. And, mirabile dictu, i happen to know the Administrator password. I put it in, and get a message that this Windows installation isn't "activated" yet -- presumably because the hardware has changed out from under it. And by the way, i'm not allowed to log in via safe mode until it's activated. So please reboot to "normal" Windows and activate it first.

Except, of course, the whole reason i'm booting into safe mode was because normal Windows gives a BSOD. Grrrr. Who thought up this particular lovely catch-22?

OK, change tactics. Scavenging the scrap bin turns up a machine with a failed mainboard, but a power supply with all the right leads. It's rated for about 80W less than the old machine's failed supply, but i figure if i rip out the DVD-burner and the floppy drive maybe it will hold. Oh, and the replacement power supply doesn't physically fit the old chassis, but it hangs halfway out the back and sort of rattles around a bit. I sacrifice the rest of the scrap machine, rip out its power supply, stuff the power supply into the old chassis, swap the original disk back in, and ... it boots successfully, finally.

That was the shorter version of the story :P

So now my colleague has a horrible mess of a frankencomputer which is more likely to fail again in the future, instead of a nice shiny upgrade. Why? Because Microsoft's need to control the flow of software takes priority over the needs of their users.

This is what you get when you let Marketing and BizDev drive your technical decisions.

Better debugging tools can help us understand what's going on with MIME messages. A small Python script i wrote a couple of years ago named printmimestructure has been very useful to me, so i thought i'd share it.

It reads a message from stdin, and prints a visualisation of its structure, like this:

It feels silly to treat this ~30 line script as its own project, but i don't know of another simple tool that does this. If you know of one, or of something similar, i'd love to hear about it in the comments (or by sending me e-mail if you prefer).
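For a sense of how little machinery this kind of tool takes, here is a rough sketch in the same spirit, using Python's standard email module. This is my approximation, not the printmimestructure script itself, and its output format is invented:

```python
from email import message_from_string

# Rough sketch in the spirit of printmimestructure: walk a parsed MIME
# message and print an indented tree of its parts.  Not the actual
# script; the output format here is invented.

def print_structure(msg, prefix=''):
    """Print an indented tree of a message's MIME parts."""
    if msg.is_multipart():
        print(prefix + msg.get_content_type())
        for part in msg.get_payload():
            print_structure(part, prefix + '  ')
    else:
        payload = msg.get_payload(decode=True) or b''
        print(f'{prefix}{msg.get_content_type()} {len(payload)} bytes')

# The real tool reads from stdin, e.g.
# email.message_from_binary_file(sys.stdin.buffer); a tiny inline
# sample keeps this sketch self-contained:
raw = ('MIME-Version: 1.0\n'
       'Content-Type: multipart/mixed; boundary="XX"\n'
       '\n'
       '--XX\n'
       'Content-Type: text/plain\n'
       '\n'
       'hello world\n'
       '--XX--\n')
print_structure(message_from_string(raw))
```

The indentation makes nesting obvious at a glance, which is most of what you want when debugging a mangled multipart message.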

If it's useful for others, I'd be happy to contribute printmimestructure to a project of like-minded tools. Does such a project exist? Or if people think it would be handy to have in debian, i can also package it up, though that feels like it might be overkill.