A. Karl Kornel's personal blog


I work at a University that is a member of InCommon. One of the benefits of joining InCommon is getting access to an unlimited number of TLS (SSL) certificates (including EV, client, and code-signing certs). I recently decided that, instead of a traditional RSA cert, I wanted to get an ECC certificate. In this post, I explain how to use OpenSSL to generate an ECC certificate request, in a way that InCommon (and COMODO) will accept.

Recently, I wanted to read about the NSA’s Commercial National Security Algorithm (or CNSA) Suite, which is their replacement for the Suite B algorithms. The web site for the CNSA Suite is https://www.iad.gov/iad/programs/iad-initiatives/cnsa-suite.cfm, but if you go there now on a Mac, you’ll probably get a security warning. The reason is that this web site uses a certificate issued by the DoD, and I didn’t have the DoD root certificates installed. How did I get them installed? Read on!

Signs of Trouble

Monday morning, as I was on the bus to work, I received a very curious email from GitHub.

Hello akkornel,

GitHub has recently implemented new measures to identify and block insecurely generated SSH keys from being added to accounts. GitHub has also analyzed all existing keys that were added before this additional validation was in place.

As a result of these new measures the following key was identified and removed from your account:

Yubikey-based key
58:1a:a7:99:fd:14:3e:3b:f9:67:a5:ca:d4:00:cb:dd

…

My first thought was that I was being phished somehow. I assumed that the key fingerprint was valid—GitHub’s API allows anyone to get any user’s public keys—but the email was a plain-text mail with no links and no attachments, so it didn’t seem like phishing. That, plus the fact that my SSH key description was included in the email (something that isn’t publicly available), made me concerned.

A quick aside: After reading the Yubico post, I went to post it to Hacker News, only to find it had already been posted overnight. Yubico had apparently (and likely accidentally) posted about the issue the previous evening, and the post was later pulled. The Hacker News item hadn’t gotten much attention, but since Yubico’s post was back, I emailed the Hacker News mods, and it got re-posted. Kudos to them for that!

Making New Keys

My subkeys were affected because I used the Yubikey to generate the private keys. It’s pretty easy to do! Assuming that you already have a private key…

Insert your Yubikey NEO or Yubikey 4.

Use gpg --card-status to make sure that GPG can see your key.

Run gpg --edit-key YOUR_KEY_ID addcardkey to have the card generate a key for one of the three slots (encryption, authentication, or signing). You’ll need your GPG key’s passphrase, the Yubikey’s PIN, and also the admin PIN.
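Put together, the session looks roughly like this (YOUR_KEY_ID is a placeholder, and the exact prompts vary by GPG version):

```
$ gpg --card-status                  # confirm GPG can see the Yubikey
$ gpg --edit-key YOUR_KEY_ID addcardkey
# GPG asks which slot to generate: signature, encryption, or authentication.
# You will be prompted for your GPG key's passphrase, the user PIN,
# and the admin PIN; the card then generates the key on-chip.
gpg> save
```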

I used the above method to generate my signing and authenticating subkeys, so the Infineon chip handled all the private key generation. GPG then took the public part of the key (provided by the chip), signed it with my main private key, and added the result to my public GPG key. As for the private key, GPG stores the serial number of the device, so when it needs the private key, GPG knows which device to ask for!

The upside of all of this is that it’s really hard to get the private key out from where it is stored. The downside is that I’m trusting in the hardware to generate a good key.

Yubico fixed this issue with firmware 4.3.5, but since Yubikey firmware can’t be upgraded, I had to get a new one. Happily, Yubico are providing free replacements for people who have vulnerable Yubikeys. I put my order in on Monday, and the replacement was in my mailbox by Friday. Kudos to Yubico for the transparency and for offering free replacements!

Another aside: I was thinking of generating ECC keys instead of RSA keys, because the vulnerability only affected RSA key generation. Unfortunately, the Yubikey 4 implements version 2.1 of the OpenPGP Card specification. ECDSA keys were added in Version 3.0, so I’m stuck with RSA keys for now. Also, OpenSSH doesn’t support ECDSA on cards right now, so the most I’d be able to do is a signing ECC key.

Once I had the replacement, I generated new authentication and signing subkeys, and copied my existing encryption subkey to the card. I also revoked my original signing and authentication subkeys. The updated public key is available directly now, and will be live on the key servers over the next day or two.

A third aside! If you copy a subkey to a new card, but GPG keeps asking you to insert your old card, you’re probably being hit by GnuPG bug T1983. Either update GPG or delete the appropriate files from the ~/.gnupg/private-keys-v1.d directory.

Spreading the Word

There are two things about GPG signatures that are working against me here:

I don’t have a record of every signature I made.

GPG signatures include a timestamp, but that comes from the computer’s clock; if someone has my private key, they just need to change their computer’s clock to a time before I revoked my sub-key.

Problem #1 could probably be solved by better record-keeping, but there’s no way I’d be able to keep the Notary Public-level records that would be required. Problem #2 can be solved by a third-party timestamp service, which others would have to trust, and which I would have to remember to use.

The best I can do is assume that most of my signatures aren’t really going to be of any importance in the future, except for the ones I’ve recently made. My most-recent signatures were made for a PGP key signing I went to a week or so before this all happened, so I will just email those participants, letting them know that the signatures from my previous mails will likely fail validation.

That takes care of my regular signatures, but there’s one set left, and these are gonna be much harder to deal with: I have to figure out a way to re-sign all of my signed Git commits.

Ummmmm, What About Git?

This is the big problem. I have used my now-revoked key to sign tags and commits; in public repositories, and in Stanford-internal repositories. One of the internal Git repositories mandates that all commits be signed; that repository validates signatures against keys kept in a separate, server-local keyring.

The signed tags are easy enough to deal with: I simply re-create them, using my new key. The annoyance is that I have to go through each tag to find the ones that I have signed. Luckily, our Git servers allow overwriting existing tags, so this can be done.
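Finding the signed tags can be scripted. Here is a small sketch: it prints each tag whose tag object carries a PGP signature (lightweight tags and unsigned annotated tags produce no match):

```shell
# Print the names of tags whose tag object contains a PGP signature.
for t in $(git tag); do
  if git cat-file -p "refs/tags/$t" 2>/dev/null | grep -q 'BEGIN PGP SIGNATURE'; then
    echo "signed: $t"
  fi
done
```

Run it from inside the repository; each tag it prints is one you would re-create with the new key.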

The git commits are harder. With my now-revoked subkey, old signed commits now come with warnings. You can see this if you clone my syncrepl project and run git log --show-signature dfddd1a676cdea723fc077972e9588df0cd2730b:

The above output comes from Git running locally. If you look at the syncrepl project’s commits on GitHub, as of this posting the commits are all showing Verified. That is because I have control over the GPG key I upload to GitHub, and I haven’t (yet) updated it to include my revoked sub-keys.

So, what can be done? The brute-force option would be to go through all of my old commits, identifying which ones were signed by me. I would then start a new branch off of the next-oldest commit (that is, the one right before my first signed commit). I would then start a long sequence of git cherry-pick operations. Each time I cherry-pick a commit that I signed, I would instead do a git cherry-pick -S to update the signature. In the end, I would have a new branch whose contents match the old branch, but whose signatures have been updated. My new branch would then take the name of the old branch, and I would force-push up to the server to make it so.

The brute-force method would work, but besides being extremely time-consuming and annoying, there are two other problems:

The force-push means everyone else using my repository would have to essentially git reset themselves onto my new branch head. Any commits others have made since then would have to be cherry-picked.

If anyone else in the sequence has signed commits, we would all have to work together in a carefully-choreographed sequence of cherry-pick-sign—push—wait—pull—cherry-pick-sign—push—et-cetera. This would have to be done even if other people’s signatures were fine: Because git cherry-pick makes a new commit object, the signature is lost unless the original signer uses git cherry-pick -S to sign the new commit.
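Here is the brute-force flow sketched in a throwaway repository. The commits here are stand-ins for real signed commits; in a real repository you would add -S to the cherry-pick so each replayed commit gets a fresh signature (I leave it off here so the sketch runs without a GPG key):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
g() { git -c user.name=me -c user.email=me@example.com "$@"; }
g commit -q --allow-empty -m "base"
g commit -q --allow-empty -m "signed A"   # pretend these two were signed
g commit -q --allow-empty -m "signed B"   # with the now-revoked key
branch=$(git rev-parse --abbrev-ref HEAD)
# Start a new branch at the commit just before the first "signed" one:
git checkout -q -b resigned "$branch~2"
# Replay the signed commits; with a real key, use: g cherry-pick -S ...
g cherry-pick --allow-empty "$branch~1" "$branch"
git log --format=%s resigned
```

The final force-push, and the coordination with every other contributor, are what make this approach so painful in practice.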

Revocation Commit

The solution I devised involves two commits, and a separate out-of-band posting. Things start before I revoke my now-revoked sub-keys.

First, I make sure that my repository is completely up-to-date. If I had to pull anything, I make sure that none of the pulled commits use my soon-to-be-revoked sub-key. I then make a signed empty commit. An empty commit can be made by appending --allow-empty to your git commit line, in a Git repository where there is nothing ready to be committed. Git will prompt you for a commit message as normal, and then make the commit as normal.
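Here is the empty-commit step in a throwaway repository (in your real repository, add -S so the commit is signed with your key; I omit it here so the sketch runs without one):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
# --allow-empty permits a commit with nothing staged; add -S to sign it.
git -c user.name=me -c user.email=me@example.com \
    commit --allow-empty -m "No longer using key 14A7B2A56335B8D5"
git log --format=%s -1   # prints the commit subject
```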

Here is what my commit message looked like:

No longer using key 14A7B2A56335B8D5
This is an empty commit, in that no files are being changed. This
commit is just here to leave a message, which is that I am revoking my
current signing key:
Signature key ....: F0C1 EF27 14C5 0582 915C 59F8 14A7 B2A5 6335 B8D5
created ....: 2015-12-13 06:54:47
Today (Monday, October 16, 2017) will be the last day I sign anything
with the above key.
I am revoking this key because of CVE-2017-15361. My signing key was
generated by, and lives on, a Yubikey 4 that is affected by the
vulnerability described in the CVE. Once the details of the
vulnerability are out, it will be possible for others to get my
signing private key.
This affects all of my signed commits and tags. Although re-signing
the tags is possible (which I will do once I have a new key),
re-signing commits is not really possible, because that would cause
the commit ID to change, affecting the rest of the tree, and making it
really hard for other people.
So, I am leaving this note. By leaving this note, I make a new commit
ID. Once I have my new key, I will leave another note, which says
that this commit ID is valid. That makes kind of a chain of trust,
even though it's likely that Git will say the signature of this note
is invalid.
Not only that, but every commit after this one will make it harder for
someone to go back and change the note: The more commits there are
after this note, the more commit IDs will change if this note is
modified; and the more people who have clones of this repo, the bigger
the commit difference will be when they do a pull. Of course, it's
not perfect, but I think it's better than nothing!
Maybe there will be a future way to re-sign commits, in a way that
does not disrupt the repository.
For reference, here are all of my current tags, and their commit IDs:
TAG_NAME COMMIT_ID

The above commit was signed by my now-revoked key. I then put the following message into another non-empty commit:

Confirming commit dfddd1, and past signatures
This commit-which has been made with my new key-is to confirm that
commit dfddd1a676cdea723fc077972e9588df0cd2730b is valid, and was
signed by the following key:
Signature key ....: F0C1 EF27 14C5 0582 915C 59F8 14A7 B2A5 6335 B8D5
created ....: 2015-12-13 06:54:47
That commit's signature, and the signatures of the previous commits
which have been signed by the above key, should be trusted as much
as the signatures made with this key.

The commit ID referenced above is the ID of my revoked-signing-key commit. In this example, this second commit has ID 5204c0a.

As a third party, you start from a known point—the commit ID of the branch tip—and you begin walking back through the commits. In your trip back, you first come across my signed-and-currently-valid commit, 5204c0a. That commit says commit dfddd1a is also valid, even though it was signed by a now-revoked key. I also give the ID of the now-revoked sub-key, for future reference.

Eventually you reach commit dfddd1a. Git confirms that it is signed by a revoked sub-key, but the signature comes from the sub-key mentioned in the validly-signed commit 5204c0a. Commit dfddd1a lists the same sub-key ID, and explains the reason for the revocation.

At this point, as long as you trust my current sub-key (which signed commit 5204c0a), you should also trust commit dfddd1a. At that point, you can decide on the validity of my other commits:

Commits older than commit dfddd1a, and which were signed by my now-revoked sub-key, should still be OK, because you can trace a direct path back from the “validating commit” 5204c0a.

Commits newer than commit dfddd1a, and which were signed by my now-revoked sub-key, are to be treated with suspicion, because they were made after I explicitly said that I would stop using my now-revoked sub-key.

At this point, I can only think of two ways for someone with my revoked sub-key to get new commits into the repository:

Someone could have slipped in a signed commit before commit dfddd1a, and gotten that pushed to the Git server, before I did my pull. Or they could have gotten it onto my computer by some other means.

Someone could do a force-push, adding new (bad) commits; and replacing commit dfddd1a with a new commit, using the same text, and signed by my now-revoked key.

The defense against threat #1 is to do the “revocation commit” as soon as possible when it is known that the key is compromised. For extra safety, do a manual review of recent previous signed commits, and do a git fsck --strict to make sure your local copy is intact.

The defense against threat #2 is to have a separate out-of-band posting. This posting takes the form of a table, containing three items:

The Git repository in question.

The commit ID of my “revocation commit”.

The commit ID of the first commit with my new sub-key.

In my case, I have both public and private repositories, and I do not want to expose the names of the private repositories, so I am just including the commit IDs. Here is the table:

I also have the table available separately as a GitHub Gist, the idea being that having the same info around in multiple places will make it harder for that info to be lost or modified. I’m also having this post picked up by the Internet Archive’s Wayback Machine. The Gist is also signed by my new sub-key.

I think it is OK to leave off the repository identifier from the table, because commit IDs should be unique enough that the chance of the same commit ID appearing twice in one of the columns is vanishingly small. It does make things harder to verify (you don’t know which row to check), but to be honest, it’s unlikely that you’ll need this table (though it’s still good to have!).

Future Alternatives

What I’ve done is, in my opinion, a good human-readable solution. I’m sure there are problems that I’ve missed, but I hope this provides at least some protection. Of course, this only works if a human actually reads it: programs using git verify-commit will continue to complain about commits made with my revoked sub-key.

I already discussed the brute-force solution, where a careful dance is performed to make a new sequence of re-signed commits. But I wonder if—in the future—there could be another way, one that doesn’t involve rewriting history?

The problem to be addressed is this: You need to be able to sign objects that already exist, without rewriting the object (because changing the object will likely change the history). My suggestion is that there be a new object type, like a detached signature, and a new file tree under .git or .git/refs.
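For comparison, an existing signed, annotated tag object (what git cat-file -p shows for the tag) looks roughly like this; the tag name, tagger identity, and timestamp below are hypothetical:

```
object dfddd1a676cdea723fc077972e9588df0cd2730b
type commit
tag v1.0.0
tagger A. Karl Kornel <karl@example.org> 1508194800 -0700

Tagging release 1.0.0
-----BEGIN PGP SIGNATURE-----
(signature data)
-----END PGP SIGNATURE-----
```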

The signed object includes the name of the tag (which should match the filename in .git/refs/tags), the identity of the signer (which should match the signature), and the type and ID of the tagged commit. Let’s use that as a template to make a new object, to represent an additional, “detached” signature on an existing object:
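A hypothetical “detached signature” object, following that template, might look like this (the field names are my invention; the fingerprint is the revoked sub-key’s, from the commit messages above):

```
object dfddd1a676cdea723fc077972e9588df0cd2730b
type commit
signkey F0C1EF2714C50582915C59F814A7B2A56335B8D5
signer A. Karl Kornel <karl@example.org> 1508194800 -0700
-----BEGIN PGP SIGNATURE-----
(signature over the fields above)
-----END PGP SIGNATURE-----
```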

Again we have the type and ID of the tagged commit, but we now explicitly include the ID of the signing sub-key (which should match the signature). And we still have the identity and time that the signature was made. All of these pieces help to ensure that we get a unique object hash.

Now that we have an object hash, where do we put it? My thought is to have a new directory—either in .git or in .git/refs—called signatures. The directory would be structured similar to the object directory: The first level would be two-character hash prefixes (00 through ff). The second level would be object hashes: If you have an object with multiple signatures, the second level will have a directory whose name is the hash. Finally, on the third level, you would have files: The file name would be the ID of the signing key or sub-key; the contents would be the object ID of the detached signature object (which I described above).
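One possible layout, under those assumptions (using the commit and sub-key IDs from earlier in this post):

```
.git/signatures/
  df/                                          # two-character hash prefix
    dfddd1a676cdea723fc077972e9588df0cd2730b/  # the object that was signed
      14A7B2A56335B8D5                         # file named for the signing sub-key;
                                               # contains the detached-signature object ID
```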

Checking for detached signatures would involve a check of the signatures directory. If detached signatures were found, then Git would be able to evaluate all of the signatures, and make a decision about the validity of the commit.

So, what do you think? It seems to me that signed commits aren’t used very much right now, but the functionality exists, and I think my experience exposes a weakness in how signed commits are implemented. I want to keep the history, but keys either age out (through loss or expiration) or are revoked; there has to be a way of dealing with commits that are good, but whose signing keys have been revoked some time after the signature was made.

I’ve done the best I can with my weird, “empty” commits, and I hope a way is found to implement something appropriate!

As part of my job, I support various labs (and other users) on campus. My work includes hardware maintenance, system administration, and software development. One of the labs on campus (the Quake Lab) asked me to automate part of their DNA post-sequencing demultiplexing and delivery process. I did that, but I did it in a fairly non-portable way: The code works with multiple sequencers, and it can be moved to other Linux systems, but it has a number of annoying dependencies.

As time goes on, I know that I will need to move this software to other hardware. With multiple external dependencies—a third-party package, a few things from EPEL, and some other code—this is not an easy process. However, there is a solution! Containers!!

However however, this code needs to be run by (and as) regular people, and it needs to interact with the local filesystem. Docker might not be the best option here. Instead, I am going to use Singularity for my containers.

This document describes how to take an existing workflow and build it as a Singularity container, so that it may be easily moved between systems.

This assumes that you’re running macOS 10.9 or later, and that you have admin access on your system.

NOTE: Some of these commands have you running them as root, via sudo. Only use sudo for these specific commands! For example, you need sudo when you are installing stuff, but you do not need sudo for day-to-day things (like using Xcode, or using the software that you install).

Install Xcode from the App Store.
Although it’s possible to download the installer package, it’s easier to just install it from the App Store. The only reason I’d install the package directly is if I was using some sort of system-management platform to push out software.

Install the Command Line Tools that match your version of Xcode and macOS. The download will be named “Command Line Tools (OS X 10.???) for Xcode ???”, so make sure you pick the one for your version of Mac OS X and your version of Xcode.
Unfortunately, there is no App Store entry for this.

The MacPorts installer updates your PATH, but for some reason it doesn’t update your MANPATH. Add the following line to your .bash_profile:

export MANPATH="/opt/local/share/man:$MANPATH"

If you use Eclipse, which ships its own Git implementation, you might want to ensure that it uses the OpenSSH you’ve installed, so add this line to your .bash_profile:

export GIT_SSH=/opt/local/bin/ssh

Copy and customize one of the SSH configs from my Stanford web space. Even though the MacPorts version of OpenSSH is being used, it is not acting as the system SSH daemon, so you need to match the SSH config you download to your OS release. In other words, if you’re running Mac OS X El Capitan, that’s the file you should download. Name the file config, and put it into the .ssh folder in your home directory.

Open the config file you just downloaded, and make some changes:

Look for the line “Host *.stanford.edu“, and change that to cover your own company’s domain, or to cover the systems that you use.

Look for the ProxyCommand line below “Host *.stanford.edu“, and change it to use your own group’s bastion host. If you don’t have a group bastion host, and you want all connections to go direct, then comment out the ProxyCommand line.

Somewhere above the “Host *.stanford.edu” line, insert a new section, named “Host YOUR_BASTION_HOST_HERE“. In that section, add the configuration line “ProxyCommand none“. This will tell SSH not to use the proxy command when connecting to your bastion host. If you don’t do this, then every SSH connection, including connections to the bastion host, will cause a loop. Also add the line “DynamicForward localhost:1080“, to open a local SOCKS proxy on the bastion host. This is useful if you need to proxy other traffic (like HTTP traffic) through your bastion host, or if you will be using proxychains.
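Put together, the relevant sections might look like this (the hostnames and the exact ProxyCommand are placeholders for whatever your downloaded config uses):

```
Host bastion.example.edu
    ProxyCommand none
    DynamicForward localhost:1080

Host *.example.edu
    ProxyCommand ssh -W %h:%p bastion.example.edu
```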

Pull in BASH customizations:

Copy all of my BASH files to your home directory, and then add a dot to the start of each filename.

Make a symlink from .bash_stanford to either .bash_stanford_mit (if using MacPorts’ kerberos5 package) or .bash_stanford_heimdal (if using macOS’ built-in Kerberos).

Open .bash_stanford with a text editor, and change the “BASTION=” line to use the name of your group’s bastion host. Note that this is just the hostname. If you are using a different domain name, then you’ll need to go through the entire shell script, making changes wherever you see “stanford.edu”. If your bastion host uses something other than port 44, change the “bastion” alias at the bottom of the script. You should also change (or remove) the command that is run on your bastion host automatically.

Finally, update your .bash_profile file to run the .bashrc:

. ~/.bashrc

Close and re-open all Terminal windows, so that they pick up the changes made to your profile scripts. To test out your bastion host connection, run the command bastion.

To set up proxychains, create a file at ~/.proxychains/proxychains.conf.
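A minimal configuration pointing at the SOCKS5 proxy opened by the DynamicForward line (localhost, port 1080) would look something like this:

```
strict_chain
proxy_dns
[ProxyList]
socks5 127.0.0.1 1080
```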

If you do a lot of Perl work, consider installing Perlbrew. But, if you do, be sure to read my warning! Also, consider whether you want to enable Perlbrew in your .bash_profile before or after you set the PATH for MacPorts. MacPorts can also install Perl (and Perl packages), so if you aren’t careful you’ll get weird clashes between your Perl and MacPorts’ Perl.

If you do anything with PGP/GPG, including signing, encrypting, or authenticating, then install the GPG Suite. In my case, I’m doing package signing (Debian packages) using a hardware key (on a Yubikey 4), so I need the functionality that the GPG Suite provides.

That’s it! Most of the software (like Kerberos and OpenSSH) is documented elsewhere. Here are my specific usage notes for other stuff.

To switch which Kerberos principal you’re using, use the aliases pag (to switch to your regular principal), rootpag (root principal), adminpag (admin principal), or sunetpag (sunet principal). The principal type appears in the command prompt, or the message (nc) appears to indicate that you don’t have a principal right now.

Unfortunately, I’ve noticed a tendency for the OS to switch principals in the background unexpectedly. Send a noop command (that is, just press enter at the shell prompt) to see if your active principal has changed.

My bash profile also provides a number of aliases to help with regular SSH stuff, including:

“sshs HOSTNAME” will run “ssh -l YOUR_USERNAME HOSTNAME.stanford.edu“

“sshr HOSTNAME” will run “ssh -l root HOSTNAME.stanford.edu“. Make sure you’re using your root principal before you use this alias!

Using proxychains is as simple as prefixing your command with proxychains. For example:

proxychains remctl my-server command subcommand arg1

That routes all network connections through the SOCKS5 proxy at localhost port 1080, which should be set up by the DynamicForward line in your SSH configuration. However, that also means proxychains will only work while you have an SSH connection open to the bastion host!

Here’s yet another thing that hit me at work today, and getting the answer involved annoying searching & testing, so here it is for you!

I use a Debian jessie workstation, and my SSH key lives on a Yubikey. We don’t really use pubkey-based authentication in my area (we use Kerberos), so the first time I needed to do a pubkey-authenticated SSH connection was to clone one of my GitHub repositories. When doing the clone, I got the mysterious message “Agent admitted failure to sign”. GitHub talks about this error, but their article only talks about locally-stored keys (like id_rsa keys). My private key lives on the Yubikey, so their article didn’t really work for me.

Doing some investigation, I could see that my SSH client was trying to authenticate, and it did see my Yubikey, but still the authentication was failing. If you have this problem, there are two important things to check:

Your GPG Agent must be running with SSH support.

Your authentication key’s keygrip needs to be loaded in the .gnupg/sshcontrol file.

My problem was actually #2, but #1 is important to mention!

SSH Support in GPG Agent

The GPG Agent is able to act as an SSH Agent, speaking the ssh-agent protocol, as long as it has been told to do so. In order for this to work, you need to specify --enable-ssh-support on the gpg-agent command line, or add “enable-ssh-support” to ~/.gnupg/gpg-agent.conf.

To confirm this, check your gpg-agent, and check for the appropriate environment variables:
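For example (the values shown will be “unset” or different if no agent is running):

```shell
# Show the two agent-related variables; "unset" means no agent was found.
echo "SSH_AGENT_PID=${SSH_AGENT_PID-unset}"
echo "SSH_AUTH_SOCK=${SSH_AUTH_SOCK-unset}"
```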

SSH_AGENT_PID contains the process ID of the ssh-agent, which is actually gpg-agent. Also, SSH_AUTH_SOCK points to a special socket that gpg-agent creates, meant for connections speaking the ssh-agent protocol.

If SSH_AGENT_PID is pointing to a different process, you may have ssh-agent running somewhere else. You should find out what is running ssh-agent, and stop it. gpg-agent is able to manage locally-stored keys (like id_rsa), so I don’t think there’s a need to run ssh-agent, as long as you are running gpg-agent with SSH support.

If SSH_AGENT_PID isn’t specified at all, or you just added “enable-ssh-support” to ~/.gnupg/gpg-agent.conf, then you’ll need to restart gpg-agent. In many cases, it’s easier to log out and log back in, so that everything picks up the new gpg-agent.

The sshcontrol File

This is what affected me.

The file at ~/.gnupg/sshcontrol contains a list of “keygrips” for the keys that gpg-agent will offer for ssh-agent authentication. Even if gpg-agent is configured and can see your key, it will not work unless the key is listed in the sshcontrol file! For local keys, running ssh-add will automatically add them to the sshcontrol file, but that doesn’t work for keys that live on an OpenPGP card.

I had gpg-agent running with SSH support, but gpg-agent does not automatically add keys that are already on an OpenPGP card, so it’s up to you. What you need to do is…

Extract the keygrip.

Add the keygrip to the sshcontrol file.

Restart gpg-agent.

Extracting the keygrip is pretty easy to do, using this string of commands:
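One pipeline that produces this listing is the following (a sketch assuming GnuPG 2.1+; the first two keygrips here are placeholders, while the OPENPGP.3 line shows the actual authentication keygrip used below):

```
$ gpg-connect-agent 'scd learn --force' /bye | grep KEYPAIRINFO
S KEYPAIRINFO 1111111111111111111111111111111111111111 OPENPGP.1
S KEYPAIRINFO 2222222222222222222222222222222222222222 OPENPGP.2
S KEYPAIRINFO E1846F4F5B1D6ABF537BC738CC3E840106457F13 OPENPGP.3
```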

Each line of output contains the keygrip for one of the three keys on your OpenPGP card: The encryption key, the signing key, and the authentication key (which is the one we want). The third line, for “OPENPGP.3”, contains the keygrip for the authentication key.

To update the sshcontrol file, simply add the keygrip to the file, on its own line. So, in my case I would add the following line to ~/.gnupg/sshcontrol:

E1846F4F5B1D6ABF537BC738CC3E840106457F13

After you update the sshcontrol file, you need to restart gpg-agent. Again, the safest way of doing that is to log out and log back in, so that everything picks up the new gpg-agent environment variables.

Over the last week, we’ve been having a problem with spam in our shared web service: Something was sending out lots of low-quality, easily-blockable spam, and the bouncebacks were filling up the Postfix queues in our outgoing email cluster. The way we tracked down the spammer was interesting, so I’m writing it up here in case it’s of interest to anyone else!

I’m right now experiencing the joys of setting up a Perl development environment on Mac OS X 10.11 (El Capitan). I’ve already talked about the weird linker warnings that appear when building Perl, and I’ve just come across another fun roadblock: A lack of OpenSSL header files. This is not unintentional, and there is a solution.

I just recently got a Mac laptop with Mac OS X 10.11 (El Capitan) installed, and one of the things I do in a new system is install a local Perl environment using perlbrew. It allows me to install and upgrade modules without worrying that I am getting in the way of the system environment. Problem is, when I built my first perlbrew environment on El Capitan, I saw some weird stuff happening in the build.