News

More Fun with tlsrestrict_nss_tool on Windows

May 21, 2018 • Jeremy Rand

Last episode: When we last left our hero, tlsrestrict_nss_tool had a few unfixed bugs that made it unusable on Windows. Everyone believed those bugs would be the final ones. Were they? And now, the conclusion to our 2-part special:

Spoiler alert: no, of course they weren’t the final bugs! Obviously Murphy needs to keep showing up, otherwise life as an engineer would be boring, right?

Anyway, so I fixed the 3 known bugs:

Use of cp

No warning when CKBI is empty.

Broken Unicode in nicknames.

And at this point, things almost worked. Specifically, I could apply a name constraint that blacklisted .org, and accordingly the Namecoin website showed an error, while Technoethical still worked. Seems good enough, right? I certainly thought so, at least enough to announce on #namecoin-dev that I believed it was working. But then I had that insane urge to torture-test it a bit more, so I ran tlsrestrict_nss_tool a 2nd time against the same NSS DB and the same CKBI library. The expected behavior is that it will examine the CKBI and NSS DB, and decide that no additional cross-signing is needed. Unfortunately, I was instead treated to a fatal ASN.1 parse error, specifically due to trailing data.

I’ve seen this error before, and it’s usually triggered by an NSS quirk. NSS doesn’t actually keep track of each certificate uniquely. If you put 2 certificates in an NSS DB that have the same Subject, and you ask certutil to give you one of them (doesn’t matter which), certutil will actually give you both of them, concatenated. This happens regularly in our usage, because the cross-signed CA and the (distrusted) original CA have the same Subject (by design).

Further examination of the logs showed that the errors were showing up while trying to handle certs that had a very odd characteristic: their names looked like what you would get from concatenating the Namecoin prefix with an empty string instead of with the name of the certificate. Given that I had just spent time fixing issues with Unicode encoding of certificate names, this seemed to be a likely culprit.

So, I made 2 (overdue anyway) changes to the codebase:

Switch from DER to PEM encoding when communicating with certutil. DER doesn’t have boundaries when you concatenate certificates, while PEM does. Using PEM should make debugging a lot easier when multiple certs show up with the same name.

When dumping a PEM cert from the NSS DB, explicitly check for multiple PEM certs, and if more than one is present, try to guess which one is correct by checking for the Namecoin prefix in its Subject CommonName and Issuer CommonName (this will be unambiguous under typical conditions). If more than one cert is present that matches the expected prefixes, throw an error and log all of the PEM certs that showed up in the dump.
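A rough sketch of the second change (function names and the prefix handling are mine, not necessarily what the real codebase uses): split the certutil dump into individual PEM blocks, then pick the one whose Subject and Issuer CommonNames carry the expected Namecoin prefixes, erroring on ambiguity.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"strings"
)

// splitPEMCerts decodes every CERTIFICATE block in a (possibly
// concatenated) PEM dump from certutil. Unlike DER, PEM keeps
// per-certificate boundaries, so this split is unambiguous.
func splitPEMCerts(dump []byte) []*x509.Certificate {
	var certs []*x509.Certificate
	for {
		var block *pem.Block
		block, dump = pem.Decode(dump)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		if cert, err := x509.ParseCertificate(block.Bytes); err == nil {
			certs = append(certs, cert)
		}
	}
	return certs
}

// pickNamecoinCert selects the single cert whose Subject and Issuer
// CommonNames start with the expected Namecoin prefixes; it errors if
// zero or more than one cert matches.
func pickNamecoinCert(certs []*x509.Certificate, subjPrefix, issuerPrefix string) (*x509.Certificate, error) {
	var match *x509.Certificate
	for _, c := range certs {
		if strings.HasPrefix(c.Subject.CommonName, subjPrefix) &&
			strings.HasPrefix(c.Issuer.CommonName, issuerPrefix) {
			if match != nil {
				return nil, fmt.Errorf("ambiguous: multiple certs match the expected prefixes")
			}
			match = c
		}
	}
	if match == nil {
		return nil, fmt.Errorf("no cert matches the expected prefixes")
	}
	return match, nil
}

func main() {}
```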

So, with those changes added, I ran it again, and quickly got an error telling me that 9 certs were being returned in a single dump. How odd. Conveniently, the log told me what certificate it was trying to dump when this happened: “Namecoin Restricted CKBI Intermediate CA for ePKI Root Certification Authority”. This didn’t look like a Unicode issue at all – that name is entirely Latin. So I Googled for “ePKI Root Certification Authority”, and quickly facepalmed. That root CA doesn’t have a CommonName! Suddenly the symptoms made sense. The root and intermediate CA’s that are created by cross_sign_name_constraint_tool prepend a Namecoin string to the CommonName of the input CA and discard the rest of the input CA’s Subject, meaning that if multiple input CA’s have a blank CommonName, their resulting Namecoin root and intermediate CA’s will end up with colliding Subjects. Fail.

The fix, of course, is to append the SHA256 fingerprint of the input CA to the Subject CommonName of the root and intermediate Namecoin CA’s. This ensures that we’ll get a unique Subject per input certificate.

And now, it works. Repeated runs of tlsrestrict_nss_tool work as they should. Kind of irritating to spend so much time chasing a silly fail like that, but on the bright side the switch to PEM resulted in cleaner code.

Next, I’ll be integrating tlsrestrict_nss_tool into ncdns. Hopefully this will expose any remaining weirdness.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Testing tlsrestrict_nss_tool on Windows

Besides the obvious and rather boring fail that tlsrestrict_nss_tool was trying to execute cp, which of course isn’t going to work on Windows (that particular code segment is a relic from quick prototyping that wasn’t ever intended to stay in the codebase), two more interesting issues were identified:

rbm builds certutil with the Visual C++ 2010 runtime, so running certutil without that runtime installed produces an obvious error. However, in order to properly detect the built-in certificates (“CKBI”) that Firefox ships with, tlsrestrict_nss_tool makes certutil load the CKBI module that Firefox distributes (not the CKBI module that certutil was built with). This means that, when certutil is asked to load Firefox’s CKBI module, the Visual C++ runtime used by Firefox’s CKBI module also needs to be present. Which happens to be Visual C++ 2015. Without that, certutil looks like it’s working – but the moment the Firefox CKBI module is loaded into certutil, certutil exits with a missing DLL error. However, the situation is worsened by the fact that, as far as I can tell, a missing DLL error in Windows doesn’t impact the exit code. So tlsrestrict_nss_tool doesn’t actually know that certutil encountered an error; it just thinks it succeeded, and happened to produce no output. What happens if certutil produces no output when dumping the CKBI list? Well, tlsrestrict_nss_tool just figures that you’re using a Firefox build that doesn’t have any default trusted CA’s! This is bad enough when you first run tlsrestrict_nss_tool, since it will basically be a no-op. But even worse, if you did have the Visual C++ dependency from Firefox, but then Firefox upgraded it, then the next time you try to run tlsrestrict_nss_tool, all of the name constraints that were previously added will get deleted, because tlsrestrict_nss_tool figures that those CA’s have vanished. How sad. The fix here is probably to make tlsrestrict_nss_tool explicitly error if the CKBI module appears to have 0 certificates in it. Such a scenario pretty much always indicates that something has gone horribly wrong involving the CKBI module, and it’s generally best to treat it as an error.

certutil’s certificate dumping functions require selecting a certificate by its nickname. What’s a nickname? In practice, for the CKBI module it seems to be the CommonName of the certificate. The nickname is passed to certutil via a command line flag. What could possibly go wrong here? Certificate nicknames can be arbitrary text, including Unicode. What happens when you pass Unicode as a command line argument in Windows? Nothing good happens, that’s for sure. In my testing, Windows will corrupt all of the non-ASCII characters, which results in certutil receiving a corrupted nickname to look up (and it correctly replies that no such nickname exists in the database). The fix here is to use certutil’s “batch command” feature. certutil allows you to put a sequence of commands into a .txt file, and you can pass that .txt file’s path to certutil with a command line flag; certutil will then run all of those commands. Since the .txt file isn’t parsed by Windows’s broken command line text decoder, Unicode inside the .txt file passes through unharmed.
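A sketch of the batch-file workaround: certutil's -B flag runs the commands listed in the file passed via -i, so the nickname never passes through argv. (The exact quoting rules inside the batch file are an assumption here.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// batchLine builds one certutil batch command that dumps the named
// certificate as PEM (-a) from an sqlite database directory.
func batchLine(dbDir, nickname string) string {
	return fmt.Sprintf("-L -d sql:%s -n \"%s\" -a\n", dbDir, nickname)
}

// dumpCertViaBatch passes the nickname to certutil through a batch file
// (certutil -B -i <file>) instead of a command line argument, so
// Windows's lossy command-line decoding never touches the Unicode
// nickname.
func dumpCertViaBatch(certutilPath, dbDir, nickname string) ([]byte, error) {
	batch, err := os.CreateTemp("", "certutil-batch-*.txt")
	if err != nil {
		return nil, err
	}
	defer os.Remove(batch.Name())
	if _, err := batch.WriteString(batchLine(dbDir, nickname)); err != nil {
		return nil, err
	}
	if err := batch.Close(); err != nil {
		return nil, err
	}
	return exec.Command(certutilPath, "-B", "-i", batch.Name()).Output()
}

func main() {}
```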

Now, I haven’t actually fixed these bugs yet. But, progress is progress. Hopefully fixes will be coming very soon.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Reproducible Builds of NSS certutil via Cross-Compiling with rbm

May 17, 2018 • Jeremy Rand

In a previous post where I introduced tlsrestrict_nss_tool, I mentioned that NSS’s certutil doesn’t have official binaries for Windows, and that “At some point, we’ll probably need to start cross-compiling NSS ourselves, although I admit I’m not sure I’m going to enjoy that.” Well, we’ve reached that point, and it was an interesting adventure.

Initially, I looked at the NSS build docs themselves, and was rather annoyed to find that there’s no documentation about how to cross-compile NSS. To make matters worse, the only results I could find by Startpaging were people saying that they couldn’t figure out how to cross-compile NSS (including some well-known software projects’ developers).

However, it just so happens that there’s a very high-profile free software project that I was certain was cross-compiling NSS: The Tor Project. Tor cross-compiles Firefox (including NSS) as part of their Tor Browser build scripts, so it seemed near-certain that their build scripts could be modified to build certutil. However, Tor’s build scripts are rather nonstandard, since they build everything with rbm. rbm is a container-based build system that’s superficially similar to the Gitian build system that Bitcoin Core and Namecoin Core use (indeed, The Tor Project used to use Gitian before they migrated to rbm). I’ve been intending to get my feet wet with rbm for quite a while now, so this seemed like a great excuse to play with it a bit.

First off, I wanted to build Firefox in rbm without any changes. This was actually quite easy – The Tor Project’s documentation is quite good, and I didn’t run into any snags (besides the issue that I initially assigned too little storage to the VM where I was doing this – The Tor Project should probably document the expected storage requirements). The build command I used was:
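From memory, it was along these lines (the exact rbm invocation and target names follow tor-browser-build's conventions and are an assumption on my part):

```shell
# Build Firefox (alpha channel) for the 32-bit Windows target via rbm,
# from a checkout of The Tor Project's tor-browser-build repository.
./rbm/rbm build firefox --target alpha --target torbrowser-windows-i686
```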

Next, I looked at Tor’s Firefox build script… and I was delighted to see that Tor is already building certutil. In fact, you can download certutil binaries from The Tor Project’s download server right now! (You want the mar-tools-*.zip packages.) Except… their build script discards the certutil binaries for all non-GNU/Linux targets. How sad.

Modifying the build script to also output certutil.exe for Windows was reasonably straightforward – rbm even worked without erroring on the first try. I did, however, need to try for a few iterations to make sure that I was outputting all of the needed .dll files. However, once I had all the required .dll files, a rather odd symptom occurred when I tested it on a Windows machine. When I ran certutil.exe from a command prompt, it would immediately exit without printing anything. Stranger, when I double-clicked certutil.exe in Windows Explorer, it didn’t even pop up with a command prompt window before it exited. In addition, I noticed that if I passed command line arguments to certutil.exe telling it to create a new database, it actually did create the database – but it still didn’t display any output.

This seemed to indicate that something was wrong not with certutil.exe’s actual functionality, but with its PE metadata: Windows was probably treating it as a GUI application rather than a console application. Checking the PE metadata confirmed this: Tor’s build scripts were producing a certutil.exe whose PE metadata was marking it as a GUI application. Some more quick searching revealed a StackOverflow post providing a short Python2 script that could edit that part of the PE metadata. I ran that script against certutil.exe… and now certutil.exe works properly. Yay!

The lack of certutil.exe binaries was one of the major blockers for releasing negative TLS certificate overrides for Firefox on Windows. Now that this barrier is behind us, I can get around to testing tlsrestrict_nss_tool on Windows, and hopefully do a release (with NSIS installer support). And as a side bonus, certutil.exe builds reproducibly, and I’ve now gotten some experience with rbm (meaning that reproducible builds for ncdns and our other Go software may be coming soon).

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Integrating ncdumpzone’s Firefox TLS Mode into ncdns

May 14, 2018 • Jeremy Rand

I discussed in a previous post some experimental work on making ncdumpzone output a Firefox certificate override list. At that time, the procedure wasn’t exactly user-friendly: you’d need to run ncdumpzone from a terminal, redirect the output to a file, close Firefox, delete whatever existing .bit entries existed in the existing Firefox certificate override file, append the ncdumpzone output to that file, and relaunch Firefox. I’ve now integrated some code into ncdns that can automate this procedure.

One of the trickier components of this was detecting whether Firefox was open. Firefox’s documentation claims that it uses a lockfile, but as far as I can tell Firefox doesn’t actually delete its lockfile when it exits (and I’ve seen similar reports from other people). Eventually, I decided to just watch the contents of my Firefox profile directory (sorted by Last Modified date) as Firefox opened and closed, and I noticed that Firefox’s sqlite databases produce some temporary files (specifically, files with the .sqlite-wal and .sqlite-shm extension) that are only present when Firefox is open. So that’s a decent hack to detect that Firefox is open.

Given that, ncdns now creates 2 extra threads: watchZone and watchProfile. watchZone dumps the .bit zone with ncdumpzone every 10 minutes, and makes that data available to watchProfile. (Right now, ncdumpzone is called as a separate process, which isn’t exactly ideal – a future revision will probably refactor ncdumpzone into a library so that we can avoid this inefficiency.) watchProfile waits for Firefox to exit (it checks at 1 Hz), and then loads Firefox’s cert_override.txt into memory, removes any existing .bit lines, appends the data from watchZone, and writes the result back to Firefox’s cert_override.txt.

These 2 new threads in ncdns are deliberately designed to kill ncdns if they encounter any unexpected errors. This is because, if we stop syncing the .bit zone to the Firefox override list, Firefox will continue trusting .bit certs that might be revoked in Namecoin. Therefore, it is important that, in such a situation, .bit domains must stop resolving until the issue is corrected. Forcing ncdns to exit seems to be the least complex way to reliably achieve this.

These changes significantly improve the UX of positive TLS certificate overrides for Firefox. A pull request to ncdns should be coming soon.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Fixing DNAME records in madns and dns-prop279

May 12, 2018 • Jeremy Rand

One of the more obscure DNS record types is DNAME (AKA the Namecoin "translate" JSON field), which is basically a DNS redirect for an entire subtree. For example, currently radio.bit. has a DNAME record pointing to biteater.dtdns.net., which means that any subdomain (e.g. batman.radio.bit.) becomes a CNAME redirect (e.g. to batman.biteater.dtdns.net.).
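The substitution that DNAME performs (per RFC 6672) can be captured in a few lines: the DNAME owner suffix of the query name is swapped for the DNAME target, producing the synthesized CNAME.

```go
package main

import (
	"fmt"
	"strings"
)

// synthesizeCNAME mimics DNAME substitution: if qname falls under the
// DNAME owner, the owner suffix is replaced by the target, yielding the
// CNAME that a DNAME-aware server synthesizes.
func synthesizeCNAME(qname, owner, target string) (string, bool) {
	if !strings.HasSuffix(qname, "."+owner) {
		return "", false
	}
	return strings.TrimSuffix(qname, owner) + target, true
}

func main() {
	cname, ok := synthesizeCNAME("batman.radio.bit.", "radio.bit.", "biteater.dtdns.net.")
	fmt.Println(cname, ok) // batman.biteater.dtdns.net. true
}
```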

DNAME is not exactly a favorite of mine in the context of Namecoin, because it’s easy to misuse it in a way that assigns trust for a Namecoin domain name to 3rd-party keys that Namecoin is intended not to trust (e.g. if you DNAME radio.bit. to a DNS domain name, you’re also assigning control of the TLSA records for _443._tcp.radio.bit. to whatever DNSSEC keys have the ability to sign for that DNS domain name, which probably includes a DNS registrar, a DNS registry, and the ICANN root key). That said, DNAME is part of the DNS, and so it should work in Namecoin, even though there aren’t likely to be many good uses for it in Namecoin.

Which is why I was surprised to notice when I tested DNAME today that it wasn’t actually working as intended in ncdns or dns-prop279. Some digging revealed that madns (the authoritative DNS server library that ncdns utilizes) didn’t actually have any DNAME support; the place in the code where it should have gone was just marked “TODO”. This was a great excuse for me to get my feet wet with the madns codebase (Hugo usually handles that code), so I jumped in.

In the process of adding DNAME support to madns, I got to read RFC 6672, and noticed that it very much looks like Namecoin’s d/ (domain names JSON) spec is not quite compliant with the RFC. Specifically, the Namecoin spec says that a DNAME at radio.bit. suppresses all other records at radio.bit., whereas the RFC says that other record types can coexist at radio.bit., with the sole exception of CNAME records. I’ve filed a bug to get the Namecoin spec brought in line with the RFC.

Once I got madns supporting DNAME properly, that meant I could test dns-prop279 with DNAME. Except testing quickly showed that dns-prop279 was crashing when it encountered a DNAME. A quick check of the stack trace showed that I had made a minor screw-up in the error checking in dns-prop279 (specifically, dns-prop279 is asking for a CNAME, but doesn’t properly handle the case where it receives both a DNAME and a CNAME). A quick bugfix later, and dns-prop279 was correctly handling DNAME.

The fixes are expected to be included in the next release of ncdns and dns-prop279.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

(Side note: some readers might have noticed that I was posting less frequently over the past month or so. That’s because my master’s thesis defense was on May 3, and as a result I spent most of the last month getting ready for that. I passed my defense, so things should be back to normal soon.)

cross_sign_name_constraint_tool v0.0.2 and tlsrestrict_nss_tool v0.0.2 Released

certinject: Add support for NSS trust stores; this enables positive TLS overrides in Chromium on GNU/Linux (and probably various other software that uses NSS for certificate validation). (Patch by Jeremy Rand.)

ConsensusJ-Namecoin v0.2.7 Binaries Available

Apr 3, 2018 • Jeremy Rand

Binaries of ConsensusJ-Namecoin (the Namecoin lightweight SPV lookup client) v0.2.7 are now released on the Beta Downloads page. This is based on the source code that was released earlier. Notable new things in this release:

leveldbtxcache mode is merged to upstream libdohj, and has therefore had the benefit of more peer review.

leveldbtxcache mode now only stores the name scriptPubKey, not the entire transaction. This significantly improves sync-up time and storage usage. (Currently, it uses around 65 MB of storage, which includes both the name database and the block headers.)

Many dependency version bumps.

As usual, ConsensusJ-Namecoin is experimental. Namecoin Core is still substantially more secure against most threat models.

Integrating Cross-Signing with Name Constraints into NSS

Mar 26, 2018 • Jeremy Rand

At the end of my previous post about porting cross-signing with name constraints to Go, I mentioned that the next phase was to automate the procedure of applying the constraints to all root CA’s in NSS, instead of needing to manually dump CA’s one-by-one from NSS, run them through my Go tool (currently named cross_sign_name_constraint_tool, because I’ve exhausted my witty software naming quota on another project[1]), and import them back into NSS. I’m happy to report that this next phase is essentially complete, and in my testing I blacklisted certificates for the .org TLD regardless of which built-in root CA they chained to (without any impact on other TLD’s).

My previous post went into quite a bit of technical detail (more so than a typical post of mine), mainly because the details of getting Go to cross-sign with name constraints with minimal attack surface were actually rather illuminating. In contrast, most of the technical details I could provide for this phase are rather boring (in my opinion, at least), so this post will be more high-level and somewhat shorter than the previous post. (And no code snippets this time!)

Early on, I had to make a design decision about how to interact with NSS. There were 3 main options available:

Pipe data through certutil.

Link with the NSS shared libraries via cgo.

Do something weird with sqlite. (This category actually includes a wide variety of strange things, including using sqlite’s command line utility, using sqlite with cgo, and using horrifying LD_PRELOAD hooks to intercept NSS’s interaction with sqlite.)

Given that I haven’t used sqlite in several years, and that I’ve never actually used cgo, but I use certutil on a daily (if not hourly) basis these days, it was pretty clear that option 1 was going to be the most effective usage of my development time. And it actually came together surprisingly fast, into a command-line tool that I call tlsrestrict_nss_tool (note the naming similarity to tlsrestrict_chromium_tool – the functionality is analogous). A few of the more “interesting” things I ended up dealing with:

Using certutil to retrieve a list of all certificate nicknames in a database looks like it returns an error. I say “looks like” because there’s actually no error. The standard output contains all the data I asked for, and the standard error is empty. But for some reason certutil returns exit code 1 (indicating an error), not exit code 0 (indicating success), for this particular operation. The exit codes for other operations in certutil don’t exhibit this issue. I ended up just having my code treat exit code 1 as exit code 0 for this particular operation, which seems to work okay.

When an NSS database contains 2 different certificates that contain the same Subject and Public Key, NSS actually can’t keep track of which is which. The metadata stays consistent, but when you ask for the DER-encoded certificate data for 1 of the certificates, NSS decides to give you both of them. Concatenated together. This led to me writing some generally horrifying code that tries to check for the presence of a certificate by doing both a prefix and suffix match against a byte slice (since I don’t have any idea what order the certificates will be concatenated in). It’s probably somewhat safer to change tlsrestrict_nss_tool to use PEM encoding rather than DER, since it’s easier to detect boundaries of concatenated PEM blocks. I’ll probably do this next time I’m cleaning up the code.
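The "horrifying" check amounts to something like this (a simplified version of the idea; the function name is mine): since NSS may return two same-Subject certs concatenated in unknown order, the wanted DER blob is matched as either a prefix or a suffix of the dump.

```go
package main

import (
	"bytes"
	"fmt"
)

// dumpContainsCert checks whether wantDER sits at either end of a dump
// that may be the concatenation of two DER certificates in either order.
func dumpContainsCert(dump, wantDER []byte) bool {
	return bytes.HasPrefix(dump, wantDER) || bytes.HasSuffix(dump, wantDER)
}

func main() {
	fmt.Println(dumpContainsCert([]byte("AABB"), []byte("BB"))) // true
}
```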

I accidentally applied a name constraint excluding .bit instead of .org the first time it ran successfully, and while trying to undo my mistake, I realized I hadn’t ever considered how to uninstall all of these extra certificates. Back when I was just dealing with a single CA, it was easy to uninstall them via certutil by hand, but at this scale it’s not feasible to do that. So I ended up adding an extra uninstall mode. It turned out to be relatively straightforward – apparently my design was flexible enough that this functionality wasn’t hard to add, even though I had never explicitly thought about how I would do it. Whew!

The big one. Applying the name constraint to the entire NSS built-in certificate list (starting with a mostly-stock database) took 6 minutes and 48 seconds. I strongly suspect that most of this overhead is because NSS doesn’t support sqlite batching, so for every certificate that gets cross-signed, something like 7 sqlite transactions are issued. On the bright side, my code is smart enough to not attempt to cross-sign certificates for which an existing cross-signature is already present, so the 2nd time you run it, it only takes 20 seconds (which is mostly spent dumping the existing certificates in order to verify that no changes are needed). Of course, if the trust bits get changed in the built-in list (or if the DER encoded value of a built-in certificate changes), the old cross-signature will be removed, and a new cross-signature will be added. (Technically there are probably some interesting race conditions here, and properly fixing that is on my to-do list.)

Anyway, once the 6 minutes and 48 seconds to run tlsrestrict_nss_tool had elapsed, I launched Firefox (this was being done with a Firefox NSS sqlite database on Fedora), and was pleased to see as soon as Firefox booted, I immediately got a certificate error – Firefox’s home page was set to https://start.fedoraproject.org/, and .org was the excluded domain in the name constraint that I configured for the test. I tested various .org and .com websites with a variety of root CA’s, and the result was consistent: all .org sites showed a certificate error, while .com websites worked fine. For example, when I looked at the certificate chain for StartPage, Firefox reported that the trust anchor was named Namecoin Restricted CKBI Root CA for COMODO RSA Certification Authority, indicating that the name constraints had indeed taken effect.

I think the code is now at the point where I’ll soon be pushing it to GitHub, and maybe doing some binary releases for people who want to try it out in a VM and report how it works (at the risk of bricking their NSS database and losing their client certificate private keys). All that said, a few interesting caveats remain:

tlsrestrict_nss_tool only applies name constraints to certificates from Mozilla’s CKBI (built-in certificates) module. If you’re in the business of manually importing extra root CA’s, I’m currently assuming that one of the following is true:

You’re capable of using cross_sign_name_constraint_tool to manually add the name constraint before you import a root CA.

You’ve read this warning and ignored it, and therefore when you get pwned by Iranian intelligence agencies, I’m not responsible.

tlsrestrict_nss_tool has the side effect of making trust anchors from the CKBI module no longer appear to be from the CKBI module. Why does this matter? Many key pinning implementations only enforce key pins against CKBI trust anchors. (This is actually the trick we were abusing – sorry, “innovatively utilizing” – with tlsrestrict_chromium_tool, but now it’s working against us rather than in our favor.) Mitigating factors:

Chromium-based browsers are scrapping HPKP soon, so if your security model is dependent on HPKP working in Chromium, you might want to re-evaluate soon.

Chromium’s HPKP was already completely broken on Fedora due to a Chromium bug. It turns out that p11-kit, which Fedora uses as a drop-in replacement for CKBI, utilizes an NSS feature to indicate that it should be treated as CKBI, but Chromium didn’t use that NSS feature properly, and the Chromium devs had minimal interest in fixing it. Chromium’s devs explained this decision by saying that HPKP is a best-effort feature, and that HPKP failure is not considered a security issue in Chromium. (The bug was eventually fixed on December 28, 2017, approximately 9 months after it was reported to Chromium.) So again, if your security model is dependent on HPKP working in Chromium, you might want to re-evaluate, because the Chromium devs don’t agree with you.

Firefox’s HPKP can be optionally configured via about:config to enforce even for non-CKBI trust anchors. If you’re not deliberately intercepting your own traffic, you probably should enable this mode.

It’s arguably an NSS certutil bug that the CKBI-emulating flag that p11-kit uses can’t be read/set by certutil. Mozilla should probably fix that sometime.

Once I port tlsrestrict_nss_tool to tlsrestrict_p11_tool or something like that (i.e. port from NSS to p11-kit), it should be straightforward to mimic CKBI on Fedora, in the same way that p11-kit’s default CA’s mimic CKBI. This should at least be recognized by Firefox (but not by Chromium, see above).

Using tlsrestrict_nss_tool requires that you have a certutil binary. This is easily obtainable in most GNU/Linux distributions (in Fedora, it’s the nss-tools package), but I have no idea how easy it is to get a certutil binary on Windows and macOS. (No, the certutil program that Windows includes as part of CryptoAPI is not the same thing.) Mozilla doesn’t distribute certutil binaries. At some point, we’ll probably need to start cross-compiling NSS ourselves, although I admit I’m not sure I’m going to enjoy that.

tlsrestrict_nss_tool only works for NSS applications that actually use a certificate database. Notably, Tor Browser doesn’t use a certificate database, because such a feature forms a fingerprinting risk. (To my knowledge, Tor Browser exclusively uses the NSS CKBI module.) Long term, we could probably work around this issue (at the cost of adding a bunch of attack surface) by replacing Tor Browser’s CKBI module with p11-kit’s drop-in replacement. p11-kit is read-only, so in theory it can’t be used as a cookie like NSS’s certificate database can. But if you customize your CKBI module’s behavior in any significant way, you’re definitely altering your browser fingerprint. Assuming that all Namecoin users of Tor Browser do this the same way, it’s not really a problem, since the ability to access .bit domains will already alter your browser fingerprint, and replacing CKBI with p11-kit shouldn’t cause any extra anonymity set partitioning beyond that. But it’s definitely not something that should be done lightly.

Right now, tlsrestrict_nss_tool only supports sqlite databases in NSS. The older BerkeleyDB databases might be possible to support in the future, but since everything is moving toward sqlite anyway, adding BerkeleyDB support is not exactly high on my priority list.

Despite these minor caveats, this is an excellent step forward for Namecoin TLS on a variety of applications and OS’s.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

[1] I might have written a program late last year for my master’s thesis, given it a name that is simultaneously (1) an obscure Harry Potter joke, (2) an anonymity technology joke, and (3) a Latin and Greek joke, and then devoted about 2-3 pages of my master’s thesis to explaining and elaborating on the compound joke. I probably didn’t do that though; that might constitute trolling my university, and I certainly wouldn’t do that, would I?

Cross-Signing with Name Constraints Ported to Go

Mar 25, 2018 • Jeremy Rand

In my previous post about achieving negative certificate overrides using cross-signing and name constraints, I discussed how the technique could make Namecoin-authenticated TLS possible in any TLS application that uses p11-kit or NSS for its certificate storage. However, the proof-of-concept implementation I discussed in that post was definitely not a secure implementation (nor was the code sane to look at), due to the usage of OpenSSL’s command line utility (glued to itself with Bash) for performing the cross-signing. I’m happy to report that I’ve ported the OpenSSL+Bash-based code to Go.

Creating a self-signed root CA in Go is relatively easy (there’s example code for it in the Go standard library, which happens to be the code on which Namecoin’s generate_nmc_cert tool is heavily based). Signing an existing certificate with that root CA is also pretty straightforward. The standard library’s x509 package in Go 1.9 and higher supports creating a CA with the name constraint features we want (earlier versions of Go’s x509 package only supported a whitelist of domain names; we want a blacklist, which is newly added in 1.9). Right now, ncdns is built with Go 1.8.3 due to a planned effort to use The Tor Project’s reproducible build scripts (which currently use Go 1.8.3), but this code will be used in a standalone tool that doesn’t need to be compiled into ncdns, so using Go 1.9 or higher isn’t a problem. (That said, ncdns does have pending PR’s for upgrading to Go 1.9 and Go 1.10, which will be merged when The Tor Project updates their build scripts accordingly.)

The tricky part here is that Go’s standard library’s x509 package does a lot of parsing and converting of the data in certificates, and a large amount of parsing/conversion of the CA certificate that we’re cross-signing introduces an increased risk of something in the certificate being altered in a way that affects certificate validation. This could result in unwanted errors showing up for websites (including non-Namecoin websites), which would be bad enough, but it could also result in certificates being incorrectly accepted – which would mean, among other horrible things, that MITM attacks could be performed, including against non-Namecoin websites.

As one example of bad things that can happen if too much parsing is done, OpenSSL’s command-line tool doesn’t include x509v3 extensions by default when cross-signing. x509v3 extensions are responsible for lots of things, including Basic Constraints (removing this would allow a CA to issue certs via intermediates that might not be valid), Name Constraints (we certainly don’t want to remove all the existing name constraints when we add a .bit-excluding name constraint), and Key Usage and Extended Key Usage (which would make the CA valid for purposes that browsers aren’t supposed to trust it for, e.g. making the CA valid for signing executable code instead of just signing TLS certificates). While I suspect that Go is at least mildly more sane than OpenSSL’s default settings, what I really wanted was a way to simply pass-through as much of the original root CA’s certificate as possible when cross-signing it.

Here’s the Go struct that’s returned when an x509 certificate’s raw DER form is passed to the ParseCertificate function in the standard library’s x509 package (it’s far too long to reproduce in full here):
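To give a flavor of it, here’s a snippet of my own (not from the tool) that uses reflection to print a sample of the struct’s parsed fields and their Go types; the exact field set varies by Go version:

```go
package main

import (
	"crypto/x509"
	"fmt"
	"reflect"
)

// certFieldTypes reports the Go types of x509.Certificate's fields,
// taken from the real package via reflection, so the names and types
// printed below come straight from the standard library.
func certFieldTypes() map[string]string {
	out := make(map[string]string)
	t := reflect.TypeOf(x509.Certificate{})
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		out[f.Name] = f.Type.String()
	}
	return out
}

func main() {
	fields := certFieldTypes()
	// A sample of the parsing the struct implies: special types for
	// times, IP addresses, URLs, and name constraints.
	for _, name := range []string{
		"SerialNumber", "Issuer", "Subject", "NotBefore", "NotAfter",
		"KeyUsage", "ExtKeyUsage", "IPAddresses", "URIs",
		"PermittedDNSDomains", "ExcludedDNSDomains",
	} {
		if typ, ok := fields[name]; ok {
			fmt.Printf("%-22s %s\n", name, typ)
		}
	}
	fmt.Println("total fields:", len(fields))
}
```

On a recent Go release this prints, among other things, NotBefore/NotAfter as time.Time, IPAddresses as []net.IP, and URIs as []*url.URL.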

Holy crap, that’s a lot of parsing that’s clearly happening. Among other things, we can even see special types being used for date/time values, IP addresses, and URL’s. I definitely don’t want all that stuff being added to the attack surface of my cross-signing code – it seems almost certain that some mutation would end up happening, and it would be insane to try to audit the safety of whatever mutation occurs.

However, under the hood, after the above monstrosity is serialized and is ready to be signed, a far more manageable set of structs is used by the x509 package for the actual signing procedure:
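The following is my reconstruction of those unexported structs from crypto/x509’s source of roughly the Go 1.9 era (details shift between Go versions), wrapped in a runnable demo that unmarshals a freshly generated certificate into them via the asn1 package:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/asn1"
	"fmt"
	"math/big"
	"time"
)

// Copies of the unexported structs from crypto/x509 (circa Go 1.9).
type certificate struct {
	Raw                asn1.RawContent
	TBSCertificate     tbsCertificate
	SignatureAlgorithm pkix.AlgorithmIdentifier
	SignatureValue     asn1.BitString
}

type tbsCertificate struct {
	Raw                asn1.RawContent
	Version            int `asn1:"optional,explicit,default:0,tag:0"`
	SerialNumber       *big.Int
	SignatureAlgorithm pkix.AlgorithmIdentifier
	Issuer             asn1.RawValue
	Validity           validity
	Subject            asn1.RawValue
	PublicKey          publicKeyInfo
	UniqueId           asn1.BitString   `asn1:"optional,tag:1"`
	SubjectUniqueId    asn1.BitString   `asn1:"optional,tag:2"`
	Extensions         []pkix.Extension `asn1:"optional,explicit,tag:3"`
}

type validity struct {
	NotBefore, NotAfter time.Time
}

type publicKeyInfo struct {
	Raw       asn1.RawContent
	Algorithm pkix.AlgorithmIdentifier
	PublicKey asn1.BitString
}

// selfSignedDER generates a throwaway self-signed CA cert to parse.
func selfSignedDER() []byte {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(42),
		Subject:               pkix.Name{CommonName: "Demo Root CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	return der
}

// parseRaw unmarshals a DER certificate into the low-level structs.
func parseRaw(der []byte) (*certificate, error) {
	var c certificate
	rest, err := asn1.Unmarshal(der, &c)
	if err != nil {
		return nil, err
	}
	if len(rest) != 0 {
		return nil, fmt.Errorf("trailing data after certificate")
	}
	return &c, nil
}

func main() {
	c, err := parseRaw(selfSignedDER())
	if err != nil {
		panic(err)
	}
	fmt.Println("serial:", c.TBSCertificate.SerialNumber)
	fmt.Println("extensions:", len(c.TBSCertificate.Extensions))
}
```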

Readers unfamiliar with Go programming should note that identifiers whose name begins with an uppercase letter are exported (accessible by code outside the package that defines them), while identifiers beginning with a lowercase letter are unexported (accessible only from within that package). These more manageable structs are unexported, so we can’t access them directly. The important thing to note, though, is that both when a raw DER-encoded byte array is parsed into a crazy-complicated Certificate, and when a crazy-complicated Certificate is signed (producing a raw DER-encoded byte array), these unexported structs (certificate and tbsCertificate) serve as intermediate steps, and the conversion to and from raw DER-encoded byte arrays is actually handled by the asn1 package, not the x509 package.

The first field (Raw) of each struct simply holds the unparsed binary ASN.1 representation of that struct, and we can generally ignore it. I suspect that the tbs in tbsCertificate stands for “to be signed”, since it contains all of the certificate data that the signature covers (in other words, everything except the signature). The only 5 fields that we actually will want to replace when cross-signing are the following:

In certificate:

SignatureAlgorithm

SignatureValue

In tbsCertificate:

SerialNumber

SignatureAlgorithm

Issuer

When a field is of type asn1.RawValue, it means that Go won’t try to parse its content in any way (it’s just a binary blob). This is exactly the behavior we want for all the fields that we pass-through. Unfortunately, tbsCertificate has a bunch of other fields besides the above 5 replaced fields that aren’t of the type asn1.RawValue. What can we do about that? Well, since these are private structs to the x509 package, we’re clearly going to have to copy their definitions anyway – so how about we simplify them while we’re at it? Below are my modified structs:

From there, I created a modified version of the x509 package’s signing code, which creates a tbsCertificate by parsing the original CA certificate that we’re cross-signing (instead of populating it with serialized data from the crazy-complicated Certificate struct as the x509 package does), and then signs it (via ECDSA) and serializes it (via the asn1 package) as usual. It then outputs a DER-encoded x509 certificate that’s identical to the original root CA cert, except for the 5 fields that I listed above as relevant to cross-signing.

The end result of all this work is a Go library, and associated Go command-line tool, that accepts the following as input:

Original DER-encoded x509 certificate of a root CA to cross-sign

A domain name to blacklist via a name constraint (defaults to .bit)

Subject CommonName prefixes for the root and intermediate CA’s (defaults to “Namecoin Restricted CKBI Root CA for “ and “Namecoin Restricted CKBI Intermediate CA for “), which are prepended to the cross-signed CA’s Subject CommonName when creating the root and intermediate CA’s (the intention here is to make it easy to recognize these CA’s as special based on their names)

And produces the following as output:

DER-encoded x509 certificate of a new root CA (whose private key is destroyed)

DER-encoded x509 certificate of a new intermediate CA (whose private key is destroyed) that has a name constraint, signed by the new root CA

DER-encoded x509 certificate of the input root CA, cross-signed by the new intermediate CA

The root and intermediate CA’s also have a Subject SerialNumber that contains the following message:

“This certificate was created on this machine to apply a name constraint to an existing root CA. Its private key was deleted immediately after the existing root CA was cross-signed. For more information, see [TODO]”

(Of course, the Subject SerialNumber of the intermediate CA is also visible as the Issuer SerialNumber of the cross-signed CA.) The intention here is that if someone encounters one of these certificates, they’ll notice the SerialNumber message and therefore they won’t mistakenly assume that their system has been compromised by a malicious CA certificate. (The “[TODO]” will be replaced later by a URL that contains additional information and explains how to get the source code.) Kudos to Ryan Castellucci for tipping me off that the Subject SerialNumber field was well-suited to this trick.

With this Go command-line tool, I applied a name constraint blacklisting .org to the certificate of DST Root CA X3 (as in my previous post, this is the root CA that Let’s Encrypt uses), and I got 3 new certificates as output. I added those 3 certs to NSS’s sqlite database via certutil (marking the new root CA as trusted), marked the old root CA as untrusted, and tried visiting the same 2 sites that use Let’s Encrypt that I used in my previous post (Technoethical and Namecoin.org). And happily, it worked just as my old OpenSSL+Bash version did: Technoethical loaded without errors (since it’s in the .com TLD, which I didn’t blacklist), but Namecoin.org showed a TLS certificate error (since it’s in the .org TLD that I chose to blacklist).

This worked in both Chromium and Firefox on GNU/Linux. And inspecting the resulting cross-signed certificate shows that all of the x509v3 extensions, validity period, etc. from the original DST Root CA X3 are passed through intact. Yay!

So, what’s next? Right now, this still takes a single root CA as input (and the user is still responsible for passing in the input root CA, and making the needed changes to NSS’s database using the outputted CA’s). I’ve started work on a Go program that will automate this procedure for all root CA’s in NSS; I’d say it’s somewhere around 50% complete. Once it’s complete, this should allow us to continue supporting Chromium on GNU/Linux (even after Chromium removes HPKP), and it should also allow us to add support for Firefox on all OS’s (without requiring any WebExtensions support).

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Namecoin at 34C3: Slides and Videos

Mar 19, 2018 • Jeremy Rand

CCC and Chaos West have posted the videos of Namecoin’s 34C3 talks. I’ve also uploaded the corresponding slides.

Namecoin as a Decentralized Alternative to Certificate Authorities for TLS

Negative Certificate Overrides for p11-kit

Mar 11, 2018 • Jeremy Rand

Fedora stores its TLS certificates via a highly interesting software package called p11-kit. p11-kit is designed to act as “glue” between various TLS libraries, so that (for example) Firefox, Chromium, and OpenSSL all see the same trust anchors. p11-kit is useful from Namecoin’s perspective, since it means that if we can implement Namecoin support for p11-kit, we get support for all the trust stores that p11-kit supports for free. I’ve just implemented a proof-of-concept of negative Namecoin overrides for p11-kit.

As you may recall, the way our Chromium negative overrides currently work is by abusing (er, utilizing) HPKP such that public CA’s can’t sign certificates for the .bit TLD. p11-kit doesn’t support key pinning (it’s on their roadmap though!), but there is another fun mechanism we can use to achieve a similar result: name constraints. Name constraints are a feature of the x509 certificate specification that allows a CA certificate to specify constraints on what domain names it can issue certificates for. There are a few standard use cases for name constraints:

A large organization that wants to create a lot of certificates might buy a name-constrained CA certificate from a public CA, and then use that name-constrained CA to issue more certificates for their organization. This reduces the overhead of asking a public CA to issue a lot of certificates on-demand, and doesn’t introduce any security issues because the name constraint prevents the organization from issuing certificates for domain names that they don’t control.

A corporate intranet might create a name-constrained root CA that’s only valid for domain names that are internal to the corporate intranet. This way, employees can install the name-constrained root CA in order to access internal websites, and they don’t have to worry that the IT department might be MITM’ing their connections to the public Internet.

A public CA might have a name constraint in their CA certificate that disallows them from issuing certificates for TLD’s that have unique regulatory requirements. For example, the Let’s Encrypt CA has (or at one point had) a name constraint disallowing .mil, presumably because the U.S. military has their own procedures for issuing certificates.

The 1st use case is rarely used; I suspect this is because it poses a risk to the business model of commercial CA’s. The 2nd use case is also rare; I suspect this is because many corporate IT departments want to MITM all their employees’ traffic, and most employees don’t have much negotiating power on this topic. But the 3rd use case is quite interesting… if Let’s Encrypt uses a name constraint blacklisting .mil, could the same mechanism be used for .bit?

Unfortunately, we obviously can’t expect all of the public CA’s to have any interest in opting into a name constraint disallowing .bit in the way that Let’s Encrypt opted into disallowing .mil. However, there is a fun trick that can solve this: cross-signed certificates. It turns out that it is possible to transform a public root CA certificate into an intermediate CA certificate, and sign that intermediate CA certificate with a root CA that we can create locally (this is called cross-signing). We can then remove the original root CA from the cert store, add our local root CA and the cross-signed CA to the cert store, and everything will work just like it did before. This is helpful because any name constraints that a CA certificate contains will apply to any CA certificate that it cross-signs.

Technically, the RFC 5280 specification says that root CA’s can’t have name constraints. That’s not really a problem though: it just means that we have to create a local intermediate CA (signed by the local root CA) that has the name constraint, and cross-sign the public CA with the name-constrained local intermediate CA. So the cert chain looks like this:
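Sketched out (my rendering of the arrangement just described):

```
local root CA (created locally, trusted)
  +- local intermediate CA (carries the name constraint)
     +- cross-signed copy of the public root CA (e.g. DST Root CA X3)
        +- the public CA's usual intermediates
           +- end-entity TLS certificate
```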

Huge thanks to Crypt32 and davenpcj from Server Fault for first cluing me in that this approach would work. Unfortunately, Crypt32 didn’t provide any sample code, and the sample code from davenpcj didn’t work as-is for me (OpenSSL kept returning various errors when I tried to cross-sign, most of which seemed to be because OpenSSL didn’t like the fact that the public CA hadn’t signed an authorization for me to cross-sign their CA). I eventually managed to cobble together a Bash script using OpenSSL that did work, but I don’t think OpenSSL’s command-line tool is really the right tool for the job (OpenSSL tends to rewrite large parts of the cross-signed certificate in ways that are likely to cause compatibility and security problems). I’m probably going to rewrite this as a Go program.

Anyway, with my Bash script, I decided to apply a name constraint to DST Root CA X3, which is the root CA that Let’s Encrypt uses. The name constraint I applied blacklists the .org TLD (obviously I can’t use .bit for testing this, since no public CA’s are known to have issued a certificate for a .bit domain). And… it works! The Bash script installed the local root CA as a trust anchor for p11-kit, installed the intermediate and cross-signed CA’s as trust-neutral certificates for p11-kit, and installed a copy of the original DST Root CA X3 certificate to the p11-kit blacklist. As a result, both Chromium and Firefox still work fine with Let’s Encrypt for .com websites such as Technoethical, but return an error for .org websites such as Namecoin.org – exactly the behavior we want.

I also made a modified version of my Bash script that installs the modified CA’s into a standard NSS sqlite3 database (without p11-kit), and confirmed that this works with both Firefox and Chromium on GNU/Linux. So p11-kit probably won’t be a hard dependency of this approach, meaning that this approach is likely to work for Firefox on all OS’s, Chromium on all GNU/Linux distros, and anything else that uses NSS.

This code needs a lot of cleanup before it’s ready for release; among the ToDos are:

Port the certificate handling code to a Go program instead of OpenSSL’s command line.

Automatically detect which root CA’s exist in p11-kit, and apply the name constraint to all of them, instead of only using DST Root CA X3.

Automatically detect when a public root CA is deleted from p11-kit (e.g. WoSign), and remove the name-constrained CA that corresponds to it.

Preserve p11-kit’s attached attributes for trust anchors.

Make the procedure idempotent.

Test whether this works as intended for other p11-kit-supported libraries (Firefox and Chromium use NSS; p11-kit also supports OpenSSL, Java, and GnuTLS among others).

Test whether a similar approach with name constraints can work for CryptoAPI (this would be relevant for most non-Mozilla browsers on Windows).

I’m hopeful that this work will allow us to continue supporting Chromium on GNU/Linux after Chromium removes HPKP, and that it will nicely complement the Firefox positive override support that I added to ncdumpzone.

It should be noted that this approach definitely will not work on most non-Mozilla macOS browsers, because macOS’s TLS implementation does not support name constraints. I’m not aware of any practical way to do negative overrides on macOS (besides the deprecated HPKP support in Chromium), so there’s a chance that when we get around to macOS support, we’ll just not do negative overrides for macOS (meaning that while Namecoin certificates would work on macOS without errors, malicious public CA’s would still be able to do MITM attacks against macOS users just like they can for DNS domain names). Firefox on macOS shouldn’t have this problem, since Firefox doesn’t use the OS for certificate verification.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Adding a Firefox TLS Mode to ncdumpzone

Feb 20, 2018 • Jeremy Rand

Firefox stores its list of certificate overrides in a text file. While it’s not feasible to edit this text file while Firefox is running (Firefox only loads it on startup), I’ve experimentally found that it is completely feasible to create positive overrides if you shut down Firefox while the override is being created. But is this a reasonable expectation for Namecoin? Actually, yes. Here’s how we’re doing it:

Note that most Namecoin users are doing name lookups via one of the following clients:

Namecoin Core (in full node mode)

Namecoin Core (in pruned mode)

libdohj-namecoin (in leveldbtxcache mode)

These have an important feature in common: they all keep track of the UNO (unspent name output) set locally. That means that you don’t need to wait for a DNS lookup to be hooked before you process a TLSA record from the blockchain – you can process TLSA records in advance, before Firefox even boots!

Note that the above is not true for the following cases:

libdohj-namecoin (in API server mode)

.bit domains that are delegated to a DNSSEC server

So, what we need is a tool that walks the entire Namecoin UNO set, processes each name, and writes out some data about the TLSA records. Coincidentally, this is very similar to what ncdumpzone does. ncdumpzone is a utility distributed with ncdns. It exports a DNS zonefile of the .bit zone, which is intended for users who for some reason want to use BIND as an authoritative nameserver instead of using ncdns directly. However, with some minimal tweaking, we can make ncdumpzone export the .bit zone in some other format… such as a Firefox certificate override list format.
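The real output isn’t reproduced here, but each line of Firefox’s cert_override.txt format looks roughly like this hypothetical example (tab-separated columns, with a made-up domain, a truncated placeholder fingerprint, and the column meanings explained below):

```
example.bit:443	OID.2.16.840.1.101.3.4.2.1	AA:BB:CC:(…29 more bytes…):FF	U	AAAAAAAAAAAAAAAAAAAAAA==
```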

The output was 4 lines, corresponding to the only 4 TLSA records that exist in the Namecoin blockchain right now. The first part of each line is the domain name and port of the website. OID.2.16.840.1.101.3.4.2.1 signifies that the fingerprint uses the SHA256 algorithm. (This is the only one that Firefox has ever supported, but Firefox is designed to be future-proof in case a newer hash function becomes necessary.) Next comes the fingerprint itself, in uppercase colon-delimited hex. U indicates that the positive override is capable of overriding an “untrusted” error (it can’t override validity period or domain name mismatch errors). The interesting part is AAAAAAAAAAAAAAAAAAAAAA==, which Mozilla’s source code refers to as a dbKey. Mozilla’s source code always calculates this using the issuer and serial number of the certificate; however, empirically it works just fine if I instead use all 0’s (in the same base64 encoding). Looking at the Mozilla source code, the dbKey isn’t actually used in the process of checking whether an override exists. I’m not certain exactly what Mozilla is using it for (it seems to appear in a code path related to enumerating all the overrides that exist). Since the issuer and serial number aren’t always derivable from TLSA records (you generally need either a dehydrated certificate or a full certificate for this; my goal here is to work even if only the SHA256 fingerprint of a cert is known), we just set it to all 0’s.

Copying the above output into Firefox’s cert override file, and then starting up Firefox, I was able to access the Namecoin forum’s .bit domain without any TLS errors. I’ve submitted a PR to ncdns.

While I was writing this code, I noticed that ncdns was actually calculating TLSA records incorrectly. One of the bugs in TLSA record calculation was already known (Jefferson Carpenter reported it last month), while the other was unnoticed (the TLSA records contained a rehydrated certificate that accidentally included a FQDN as the domain name; the erroneous trailing period caused the signatures and fingerprints to fail verification). The fact that these bugs in my TLSA code remained unnoticed for about a year seems to be evidence that no one is actually using TLSA over DNS with Namecoin in the real world; the only Namecoin TLS users are using ncdns’s certinject feature, which did not have this bug.

It should be noted that this approach isn’t secure in the sense that Namecoin TLS with Chromium is, because it doesn’t provide negative overrides (meaning that a public CA could issue a malicious .bit certificate that wouldn’t be blocked by this method). However, positive and negative overrides are mostly orthogonal goals in terms of implementation, so this is huge progress while we wait for proper WebExtensions support for TLS overrides. I also think it’s likely to be feasible to implement negative overrides using NSS, in a way that Firefox will honor. More on that in a future post.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Positive TLS Certificate Overrides for NSS

Feb 18, 2018 • Jeremy Rand

NSS is the TLS implementation used by various applications, including Chromium on GNU/Linux and Firefox on all platforms. I’ve finished initial support for positive cert overrides in NSS, and have submitted a PR that is now awaiting review.

I had previously written a WIP PR that implemented positive overrides for NSS, but it worked by using an NSS database directory that was auto-detected based on the active user’s home directory. This seemed like a clever usability trick, but it had 2 severe disadvantages:

Some applications don’t use the shared NSS database, but instead use their own. Firefox is one of these applications.

For security reasons, we want ncdns to run as its own user with restricted permissions. This would break the database directory auto-detection.

The new PR has explicit configuration options for which NSS database directory is used. With a command-line config pointing at Chromium’s NSS database, the Namecoin Forum’s .bit domain loaded in Chromium in my Fedora VM without any TLS errors. Obviously, this would need to be combined with the negative override functionality provided by the tlsrestrict_chromium_tool program (included with ncdns) in order to actually have reasonable security (otherwise, public TLS CA’s could issue .bit certs that would still be accepted by Chromium).

Some remaining challenges:

NSS’s certutil is extremely slow, due to failure to properly batch operations into sqlite transactions. I’ve filed a bug about this with Mozilla. Until this is fixed, expect an extra ~800ms of latency when accessing Namecoin HTTPS websites. Possible future workarounds:

Run certutil sometime before the DNS hook, so that 800ms of latency isn’t actually noticeable for the user. (More on this in a future post.)

Do some highly horrifying LD_PRELOAD witchcraft in order to fix the crappy sqlite usage.

Use a different pkcs11 backend instead of NSS’s sqlite3 backend. (Yes, NSS uses pkcs11 behind the scenes. More on this in a future post.)

Use a Firefox-specific positive override mechanism. (More on this in a future post; also see the WebExtensions posts.)

Inject CA certs rather than end-entity certs. (More on this in a future post.)

Some NSS applications don’t use the sqlite backend, but instead use BerkeleyDB as a backend. BerkeleyDB can’t be opened concurrently by multiple applications, so ncdns can’t inject certs while another application is open. Possible future workarounds:

Use an environment variable to force sqlite usage.

Run certutil while the database isn’t open.

Use a different pkcs11 backend.

Wait for those applications to switch to sqlite. (Firefox switched in Firefox 58, and it appears more applications may follow suit.)

That said, despite the need for future work, this PR makes Namecoin TLS fully functional in Chromium on GNU/Linux. (Until negative overrides stop working due to HPKP being removed… more on potential fixes in a future post.)

In Phase 5 of Namecoin TLS for Firefox, I discussed the performance benefits of moving the positive override cache from JavaScript to C++. I’ve now implemented preliminary work on doing the same for negative overrides.

The code changes for negative overrides’ C++ cache are analogous to those for positive overrides, so there’s not much to cover in this post about those changes. However, I did take the chance to refactor the API between the C++ code and the JavaScript code a bit. Previously, only 1 WebExtension was able to perform overrides; if multiple WebExtensions registered for the API, whichever replied first would be the only one that had any effect. Now, each WebExtension replies separately to the Experiment, and the Experiment passes those replies on to the C++ code. The Experiment also notifies the C++ code when all of the WebExtensions have replied. Although this does add some extra API overhead, it has the benefit of allowing an override to take place immediately if a single WebExtension has determined that it should take place, even if the other (irrelevant) WebExtensions are still evaluating the certificate.

Unfortunately, at this point I merged upstream changes from Mozilla into my Mercurial repository, only to find that there was now a compile error. I’m still figuring out exactly why this compile error is happening. It looks like it’s unrelated to any of the files that my patch touches; I suspect that it’s due to my general lack of competence with Mercurial (Mozilla’s codebase is the first time I’ve used Mercurial) or my similar general lack of competence with Mozilla’s build system.

So, until I actually get the code to build, I won’t be able to do performance evaluations of these changes. Hence why there are no pretty graphs in this post.

Meanwhile, I reached out to Mozilla to get some feedback on the general approach I was taking. (I had previously discussed high-level details with Mozilla, but this time I provided a WIP code patch, so that it would be easier to evaluate whether I was doing anything with the code that would be problematic.) This resulted in a discussion about what methods should be used to prevent malicious or buggy extensions from causing unexpected damage to user security. This is definitely a legitimate concern: messing with certificate verification is dangerous when done improperly, and it’s important that users understand what they’re getting when they install a WebExtension that might put them at risk. That discussion is still ongoing, and it’s not clear yet what the consensus will arrive at.

(It should be noted that there are some alternative approaches to Firefox support for Namecoin TLS underway as well, which will be covered in a future post.)

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Pruning of Non-scriptPubKey Data in libdohj

Feb 11, 2018 • Jeremy Rand

Our lightweight SPV client’s leveldbtxcache mode is the most secure of the various Namecoin lightweight SPV modes. Its storage requirements aren’t too bad either (129.1 MB at the moment for Namecoin mainnet). However, while 129.1 MB of storage isn’t a dealbreaker, it’s still a bit borderline on mobile devices. We can do better.

First, a reminder of how leveldbtxcache currently works. Initially, the IBD (initial blockchain download) proceeds the same way a typical lightweight SPV Bitcoin client (such as Schildbach’s Android Bitcoin Wallet) would work: it downloads blockchain headers, aiming for the chain with the most work. However, at the point when the IBD has reached 1 year ago in the blockchain, it begins downloading full blocks instead of block headers. The full blocks aren’t saved; they’re used temporarily for 2 purposes: verifying consistency with the block headers’ Merkle root (thus ensuring that no transactions have been censored), and adding any name_anyupdate transactions to a LevelDB database that allows quick lookup of names. After those 2 things have been processed, the full blocks are discarded. The 129.1 MB storage figure is as low as it is because we’re only storing name transactions from the last year (plus block headers, which are negligible in size).
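The Merkle consistency check can be sketched as follows (illustrative Go showing the standard Bitcoin-style tree, not libdohj’s actual Java implementation): hash pairs of nodes with double-SHA256, duplicating the last node at odd-sized levels, and compare the final root against the one committed to in the block header.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// doubleSHA256 is Bitcoin's hash function for Merkle tree nodes.
func doubleSHA256(b []byte) [32]byte {
	first := sha256.Sum256(b)
	return sha256.Sum256(first[:])
}

// merkleRoot computes a Bitcoin-style Merkle root over the given
// transaction hashes (each 32 bytes, in internal byte order).
func merkleRoot(txids [][32]byte) [32]byte {
	if len(txids) == 0 {
		return [32]byte{}
	}
	level := txids
	for len(level) > 1 {
		if len(level)%2 == 1 {
			// Bitcoin duplicates the last node at odd-sized levels.
			level = append(level, level[len(level)-1])
		}
		var next [][32]byte
		for i := 0; i < len(level); i += 2 {
			concat := append(level[i][:], level[i+1][:]...)
			next = append(next, doubleSHA256(concat))
		}
		level = next
	}
	return level[0]
}

func main() {
	// A single transaction is its own Merkle root.
	tx := doubleSHA256([]byte("dummy tx"))
	root := merkleRoot([][32]byte{tx})
	fmt.Println("single-tx root equals txid:", root == tx)
}
```

If the recomputed root doesn’t match the header, some transaction in the block was withheld or altered, which is exactly the censorship case the full-block download is guarding against.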

However, there’s a lot of data in name transactions that we don’t actually need in order to look up names: currency data, signatures, and transaction metadata.

Currency data exists in name transactions because name operations cost a transaction fee, so there will typically be a currency input and a currency output in any name transaction. We don’t need this information in order to look up names. Signatures are used for verifying new transactions, but are not needed to look up previously accepted transaction data. Transaction metadata, such as nVersion and nLockTime, is also not needed to look up names. There may be other sources of unwanted data too.

To improve the situation, I’ve just modified leveldbtxcache so that, instead of storing full name transactions in LevelDB, it only stores the scriptPubKey of the name output. This includes the name’s identifier and value, as well as the Bitcoin-compatible scriptPubKey that can be used to verify future signatures. It’s a relatively straightforward change to the code, although it does break backward-compatibility with existing name databases (so you’ll need to delete your blockchain and resync after updating).

So, how does this fare?

                         Full Transactions    Only scriptPubKey
Storage Used after IBD   129.1 MB             63.7 MB
Time Elapsed for IBD     ~9 minutes           ~6 minutes

Not bad, and we even got a faster IBD as a bonus. (This suggests that the bottleneck, at least on my laptop running Qubes with an HDD, was storage I/O.)

I’ve just submitted this change to upstream libdohj.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Namecoin Core 0.15.99-name-tab-beta1 Released

Feb 1, 2018 • Jeremy Rand

Namecoin Core 0.15.99-name-tab-beta1 has been released on the Beta Downloads page. Changes since 0.13.99-name-tab-beta1:

New features:

GUI

Significant rewrite of name GUI. (Patch by brandonrobertz.) In particular, please torture-test the following:

Full flow for registering names.

Full flow for updating and renewing names.

State display in the names list.

The above with mainnet, testnet, and regtest networks.

The above with encrypted locked, encrypted unlocked, and unencrypted wallets.

RPC

Remove name operation from createrawtransaction RPC method; add namerawtransaction RPC method. This paves the way for various future improvements to the name GUI, including coin control, anonymity, fee control, registration without unlocking the wallet twice, and decreased transaction size. You’ll need to update your scripts if you currently use the raw transaction API for name transactions. (Reported by JeremyRand, patch by domob1812.)

Restore getblocktemplate RPC method. This improves workflow for software used by Bitcoin mining pools. (Reported by DrHaribo, patch by domob1812.)

Add createauxblock and submitauxblock RPC methods. This improves workflow for software used by Bitcoin mining pools. (Patch by bitkevin.)

Fixes:

GUI

Fix pending name registration bug, where the GUI requests a wallet unlock over and over and then errors with “name registered”. (Patch by brandonrobertz.)

Fix bug where names weren’t showing up in the Manage Names list properly until client had been restarted. (Patch by brandonrobertz.)

P2P

Update seed nodes. This should decrease likelihood of getting stuck without peers. (Patch by JeremyRand.)

Unfortunately, Windows and macOS builds are broken in this release, so only GNU/Linux binaries are available. We expect Windows and macOS builds to be restored for the 0.15.99-name-tab-beta2 release, which is coming soon.

Recent Reports of Ransomware Using Namecoin are Missing the Real Story

Jan 30, 2018 • Jeremy Rand

Some reports are making the rounds that a new ransomware strain, “GandCrab”, is using Namecoin for C&C. While this may sound interesting, as far as I can tell these reports are missing the real story.

Another interesting feature is GandCrab’s use of the NameCoin .BIT top-level domain. .BIT is not a TLD that is recognized by the Internet Corporation for Assigned Names and Numbers (ICANN), but is instead managed by NameCoin’s decentralized domain name system.

The developers of GandCrab are using NameCoin’s DNS as it makes it harder for law enforcement to track down the owner of the domain and to take the domains down.

This doesn’t make much sense, since Namecoin isn’t anonymous (so tracking down the owner of the domain is relatively straightforward for law enforcement). But more to the point, most Internet users don’t have Namecoin installed, and it would be rather odd for ransomware to bundle a Namecoin name lookup client. This confusion is explained by Bleeping Computer (to their credit):

This means that any software that wishes to resolve a domain name that uses the .BIT tld, must use a DNS server that supports it. GandCrab does this by making dns queries using the a.dnspod.com DNS server, which is accessible on the Internet and can also be used to resolve .bit domains.

Yep, if this is to be believed, GandCrab isn’t actually using Namecoin, they’re using a centralized DNS server (a.dnspod.com) which nominally claims to mirror the namespace of Namecoin. This means that, if law enforcement wants to censor the C&C .bit domains, they don’t need to censor Namecoin (which would be rather difficult), they simply need to look up who owns dnspod.com (under ICANN policy, looking this up is straightforward for law enforcement) and send them a court order to censor the C&C domains.

However, Bleeping Computer is actually substantially wrong on this point. Why? Take a look in the Namecha.in block explorer at the Namecoin value of bleepingcomputer.bit, which is one of the alleged C&C domains:

{"ns":["A.DNSPOD.COM","B.DNSPOD.COM"]}

First off, note that this is actually a completely invalid Namecoin configuration, because the trailing period is missing from the authoritative nameserver addresses, so any DNS software that tries to process that Namecoin domain will return SERVFAIL. Second, note that the authoritative nameservers listed are… the dnspod.com nameservers. This makes it pretty clear that a.dnspod.com isn’t actually a Namecoin DNS inproxy. If it were, and even if the trailing-period fail were corrected in the Namecoin value, the inproxy would end up in a recursion loop. a.dnspod.com is actually just a random authoritative nameserver that happens to be serving records for a domain name that ends in .bit. Namecoin isn’t used anywhere by GandCrab, and killing the Namecoin domain wouldn’t have any effect on GandCrab. Of course, this raises questions about why exactly that domain name is even registered in Namecoin. The simplest explanations are:
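To make the trailing-period problem concrete, here is a minimal sketch (in Python, with an illustrative function name; this is not the actual ncdns validation logic, which is more involved) of a check that every `ns` entry in a Namecoin value is fully qualified:

```python
import json

def ns_entries_valid(value_json):
    """Return True only if every nameserver in a Namecoin value's
    "ns" list is fully qualified (ends with a trailing period).
    Illustrative sketch only, not the real ncdns validator."""
    value = json.loads(value_json)
    nameservers = value.get("ns", [])
    return all(ns.endswith(".") for ns in nameservers)

# The value actually found in the blockchain (invalid, no trailing periods):
print(ns_entries_valid('{"ns":["A.DNSPOD.COM","B.DNSPOD.COM"]}'))    # False

# What a valid configuration would have looked like:
print(ns_entries_valid('{"ns":["a.dnspod.com.","b.dnspod.com."]}'))  # True
```
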

The GandCrab developers are massively incompetent, and have potentially deanonymized themselves by registering a Namecoin domain despite not ever using that Namecoin domain for their ransomware.

Someone unrelated to GandCrab has registered that Namecoin domain for the purpose of trolling security researchers.

It’s conceivable that dnspod.com’s nameservers will only allow a .bit domain’s records to be served from their systems if that .bit domain’s Namecoin data points to dnspod.com’s nameservers, and it’s further possible that their systems are misconfigured to not notice that the trailing period is missing. However, this seemed rather unlikely to me. Why? Well, first, take a look at the WHOIS data for dnspod.com:

So DNSPod is apparently an ICANN-accredited DNS registrar, with a primary domain name in China. Which of these scenarios fits better with Occam’s Razor:

A DNS registrar located in China (which is not exactly known for its government’s respect for Namecoin’s values of free speech), which is accredited by ICANN (which doesn’t recognize Namecoin as either a DNS TLD or a special-use TLD), is doing special processing of domain names that they host in order to respect the authority of Namecoin. They’ve also never contacted the Namecoin developers to inform us that they’re doing this, nor are any of their technical people active in the Namecoin community.

DNSPod simply doesn’t care what domains their customers host on their nameservers, since if DNSPod’s nameservers aren’t authorized by a domain’s NS record in the DNS, nothing bad will happen anyway (DNSPod simply won’t receive any requests for that domain).

In addition, if DNSPod had such a policy, it’s not clear how exactly their customers would be able to switch their Namecoin domains to DNSPod nameservers without encountering downtime while DNSPod was waiting for the Namecoin transaction to propagate.

However, since empiricism is informative, Ryan Castellucci tested this with an account on DNSPod, and confirmed that no such validation occurs:

Ryan tried registering a .onion domain and bleepingcomputer.malware on DNSPod as well, but these were rejected as invalid TLD’s. Ryan and I have no clue why .bit is on DNSPod’s TLD whitelist while .onion isn’t – probably because a customer asked for it and DNSPod just doesn’t care.

Ryan isn’t aware of any prior cases where a malware C&C was set up in a random free authoritative DNS provider such as DNSPod, with the DNS servers hardcoded in the malware. It’s an interesting strategy for malware authors, since authoritative DNS providers usually don’t bother to confirm domain name ownership. Entertainingly, Ryan found that DNSPod isn’t verifying ownership of the email addresses used to register accounts either.

So in conclusion: while this is a rather interesting case of a possible hilarious opsec fail by a ransomware author (which very well might get them arrested), and the strategy of using authoritative DNS hosting providers for malware C&C is fascinating as well, the ransomware itself is fully irrelevant to Namecoin.

34C3 Summary

Jan 3, 2018 • Jeremy Rand

As was previously announced, Jonas Ostman and I (Jeremy Rand) represented Namecoin at 34C3 in Leipzig, Germany. This was our first Congress, so we didn’t quite know what to expect, but we were pretty confident that it would be awesome. We were not disappointed. The CCC community is well-known for being friendly and welcoming to newcomers, and we greatly enjoyed talking to everyone there.

Namecoin gave 3 talks, all of which were hosted by the Monero Assembly and the Chaos West Stage. I expect 2 of those talks to have videos posted sometime later this month. Unfortunately, 1 of the talks suffered an audio issue in the recording, so it won’t have a video posted (but I will post the slides of that talk, as well as the other talks’ slides). The talks’ titles are:

Namecoin as a Decentralized Alternative to Certificate Authorities for TLS

Namecoin for Tor Onion Service Naming (And Other Darknets) (No video will be posted due to audio recording technical issues)

A Blueprint for Making Namecoin Anonymous

As usual for conferences that we attend, we engaged in a large number of conversations with other attendees. Also as usual, I won’t be publicly disclosing the content of those conversations, because I want people to be able to talk to me at conferences without worrying that off-the-cuff comments will be broadcast to the public. That said, I can say that a lot of very promising discussions happened regarding future collaboration with allied projects, and we’ll make any relevant announcements when/if such collaborations are formalized.

Huge thank you to the following groups who facilitated our participation:

We definitely intend to return for 35C3 in December 2018. Until then, Tuwat!

Namecoin’s Jeremy Rand and Jonas Ostman will be at 34C3

Dec 23, 2017 • Jeremy Rand

Namecoin developers Jeremy Rand and Jonas Ostman will attend 34C3 (the 34th Chaos Communication Congress) in Leipzig, December 27-30. There’s a good chance that the 34C3 Monero Assembly will host some Namecoin talks. We’re looking forward to the congress!

Version 0.2.7 Beta 1 of the Namecoin Lightweight SPV Lookup Client has had its source code released. Build instructions are here (it’s the “bleeding-edge branch”). Binaries will be made available later. Meanwhile, the former bleeding-edge branch (the branch that introduced leveldbtxcache mode) has graduated to partially-stable. The former partially-stable branch has been deprecated.

Happily, the 0.2.7 Beta 1 release is using an unmodified upstream libdohj, since Ross Nicoll from Dogecoin has merged all of my changes. I’m still working to get the relevant ConsensusJ (formerly bitcoinj-addons) code merged upstream. As usual, the SPV client is experimental. Namecoin Core is still substantially more secure against most threat models.

Update on Namecoin Core Qt Development

Nov 20, 2017 • Brandon Roberts

It’s been roughly a year since the initial manage names tab
code was ported from legacy Namecoin to Namecoin Core. Since then, development
of namecoin-qt has been progressing on two fronts: merging the manage names Qt
interface into Namecoin Core’s master branch and the development of a d/ spec
DNS configuration interface.

In October, I initiated a pull request
to merge the manage names tab into Namecoin Core’s master branch.
This patch replaces the previous, experimental manage names (v0.13.99) interface and
brings it up to date with Namecoin Core version 0.15.99. There have been many
bugfixes and improvements, so we welcome users to test the code and report any
issues.

Secondly, thanks to support
from the NLnet Foundation’s Internet Hardening Fund, I’ve begun developing the
DNS configuration dialog. The goal is to make managing Namecoin domains much
simpler, removing the need for many users to build d/ spec JSON documents
altogether. We currently have a set of mocks available
for comments from users. Until a pull request is issued, development can be tracked
in the manage-dns
branch of my Namecoin Core repo.

Development is moving quickly, so I will continue to update the community as
things progress. As usual, you can follow our work in the GitHub repo
and on the Namecoin Subreddit.

Namecoin will be at the 2017 Oklahoma City Fall Peace Festival

Nov 10, 2017 • Jeremy Rand

Namecoin will have a table at the 2017 Fall Peace Festival in Oklahoma City on November 11. If you happen to be in the OKC area, feel free to stop by and say hello.

What Chromium’s Deprecation of HPKP Means for Namecoin

Nov 5, 2017 • Jeremy Rand

Readers who’ve been paying attention to the TLS scene are likely aware that Google has recently announced that Chromium is deprecating HPKP. This is not a huge surprise to people who’ve been paying attention; HPKP has had virtually no meaningful implementation by websites, and many security experts have been warning that HPKP is too dangerous for most deployments due to the risk that websites that use it could, with a single mistake, accidentally DoS themselves for months. The increased publicity of the RansomPKP attack drove home the point that this kind of DoS could even happen to websites that don’t use HPKP. I won’t comment on the merits of HPKP for its intended purpose. However, readers familiar with Namecoin will probably be aware that Namecoin’s TLS support for Chromium relies on HPKP. So, what does HPKP’s deprecation mean for Namecoin?

First off, nothing will happen on this front until Chrome 67, which is projected to release as Stable on May 29, 2018. (Users of more cutting-edge releases of Chromium-based browsers will lose HPKP earlier.) When Chrome 67 is released, I expect that the following behavior will be observed for the current ncdns release (v0.0.5):

The ncdns Windows installer will probably continue to detect Chromium installations (because the HPKP state shares a database with the HSTS state, which isn’t going anywhere). The NUMS HPKP installation will appear to succeed.

ncdns will continue to be able to add certificates to the trust store. This means that .bit websites that use TLS will continue to load without errors.

The NUMS HPKP pin will silently stop having any effect. This means that Namecoin’s TLS security will degrade to that of the CA system. A malicious CA that is trusted by Windows will be able to issue certificates for .bit websites, and Chromium will accept these certificates as valid even when they don’t match the blockchain.

Astute readers will note that this is the 4th instance of a browser update breaking Namecoin TLS in some way. (The previous 3 cases were Firefox breaking Convergence, Firefox breaking nczilla, and Firefox breaking XPCOM.) In this case, we’re reasonably well-prepared. Unlike the Convergence breakage (which Mozilla considered to be a routine binary-size optimization) and the nczilla breakage (which Mozilla considered to be a security patch), HPKP is a sufficiently non-niche functionality that we’re finding out well in advance (much like the XPCOM deprecation). As part of my routine work at Namecoin, I make a habit of studying how TLS certificate verification works in various common implementations, and regularly make notes on ways we could hack them in the future to use Namecoin. Based on a cursory review of my notes, there are at least 7 possible alternatives to Chromium’s HPKP support that I could investigate for the purpose of restoring Namecoin’s TLS security protections in Chromium. 3 of them would be likely to qualify for NLnet funding, if we decided to divert funding from currently-planned NLnet-funded tasks. (It’s not clear to me whether we actually will divert any funding at this point, but we do have the flexibility to do so if it’s needed.) None of those 7 alternative approaches are quite as nice as our NUMS HPKP approach (which is why we’ve been using NUMS HPKP up until now), but such is life.

In conclusion, while this news does highlight the maintenance benefits of using officially approved API’s rather than hacks (which, it should be noted, is my current approach for Firefox), at this time there is no reason for Namecoin to drop TLS support for Chromium. I’m continuing to evaluate what our best options are, and I’ll report back when I have more information.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Namecoin TLS for Firefox: Phase 5 (Moving the Override Cache to C++)

Oct 29, 2017 • Jeremy Rand

In Phase 4 of Namecoin TLS for Firefox, I mentioned that more optimization work remained (despite the significant latency improvements I discussed in that post). Optimization work has continued, and I’ve now moved the override cache from JavaScript to C++, with rather dramatic latency improvement as a result.

Prior to this optimization, my C++ code would synchronously call the WebExtensions Experiment to retrieve override decisions, and the Experiment would block until the WebExtension had returned an override decision. At this point the decision would be added to the cache within the Experiment, and then the C++ code’s call to the Experiment would return. I had long suspected that this was a major latency bottleneck, for 2 reasons:

JavaScript generally is inefficient.

After Firefox’s built-in certificate verification completed, control had to flow from C++ to JavaScript (which adds some latency) and then from JavaScript back to C++ (which adds latency as well).

With my latest changes, the control flow changes a lot:

The synchronous API for the C++ code to get positive override decisions from the Experiment is removed.

The override decision cache in the Experiment is removed.

An override decision cache is added to the C++ code.

An API is added to the C++ code for the Experiment to notify when an override decision has just been received from the WebExtension. This API adds the decision to the C++ override decision cache.

The C++ code that gets a positive override now simply blocks until an override decision has appeared in the cache; it doesn’t make any calls to JavaScript of any kind.

The advantages of this are:

A lot of inefficient JavaScript code is removed from the latency-critical code paths, in favor of more efficient C++.

Control never flows from C++ to JavaScript in order to retrieve the override decision (saves latency), and the flow from JavaScript to C++ can occur in parallel with Firefox’s built-in certificate verification (saves latency as well).

I was originally hoping to use a thread-safe data structure for the C++ override decision cache, and noticed that Mozilla’s wiki mentioned such a data structure. However, I couldn’t actually find that data structure in Mozilla’s source code. After a few hours of grepping and no luck figuring out what the wiki was referring to, I asked on Mozilla’s IRC, and was told that the wiki was out of date and that the thread-safety features of that data structure were long ago removed. So, the cache is only accessible from the main thread, and cross-thread C++ calls will still be needed to access it from outside the main thread. This isn’t really a disaster; cross-thread C++ calls aren’t massively bad.
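The blocking-cache pattern described above can be sketched as follows (a Python illustration with made-up names; the real implementation is C++ inside Firefox, and as just noted it keeps the cache main-thread-only rather than using a thread-safe structure like this one):

```python
import threading

class DecisionCache:
    """Sketch of a cache where the verifier blocks until an override
    decision has been deposited by the notification path."""

    def __init__(self):
        self._decisions = {}
        self._cond = threading.Condition()

    def put(self, cert_key, decision):
        # Called when the WebExtension's decision arrives.
        with self._cond:
            self._decisions[cert_key] = decision
            self._cond.notify_all()

    def get_blocking(self, cert_key, timeout=None):
        # Called after built-in verification finishes; usually the
        # decision is already present, so this returns immediately.
        with self._cond:
            self._cond.wait_for(lambda: cert_key in self._decisions,
                                timeout=timeout)
            return self._decisions.get(cert_key)

cache = DecisionCache()
# Simulate the decision arriving asynchronously, 50 ms from now:
threading.Timer(0.05, cache.put, args=("example.bit", True)).start()
print(cache.get_blocking("example.bit"))  # True
```
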

Since I wrote up some really nice scripts for measuring latency for Phase 4, I reused them for Phase 5 to see how things have improved.

This is a quite drastic speedup. The gradual speedup over time has vanished, which suggests that I was right about it being attributable to the JavaScript JIT warming up. (However, it should be noted that this time I did a single batch of 45 certificate verifications, so this may be an artifact of that change too.) More importantly, based on the fact that uncached and cached overrides are indistinguishable in the vast majority of cases, it can be inferred that the Experiment’s decision usually enters the C++ code’s decision cache before Firefox’s built-in certificate verification even finishes. (The occasional spikes in uncached latency seem to correspond to cases where that’s false.)

It should be noted that negative overrides haven’t yet been converted to use the C++ override decision cache. I expect them to be slightly slower than these figures, because negative overrides will have 1 extra cross-thread C++ call.

The same disclaimer as before applies: this data is not intended to be scientifically reproducible; there are likely to be differences between setups that could impact the latency significantly, and I made no effort to control for or document such differences. That said, it’s likely to be a useful indicator of how well we’re doing.

At this point, I am fully satisfied with the performance that I’m getting in these tests of positive overrides. Converting negative overrides to work similarly is expected to be easy. Of course, performance will probably be noticeably worse once the WebExtension is calling ncdns, so there’s a good chance that after ncdns is integrated with the WebExtension, I’ll be coming back to optimization.

For the short-term though, I’ll be focusing on integrating the WebExtension with ncdns.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Namecoin TLS for Firefox: Phase 4 (Fun with Threads)

Oct 11, 2017 • Jeremy Rand

In Phase 2 of Namecoin TLS for Firefox, I mentioned that negative certificate verification overrides were expected to be near-identical in code structure to the positive overrides that I had implemented. However, as is par for the course, Murphy’s Law decided to rear its head (but Murphy has been defeated for now).

The specific issue I encountered is that while positive overrides are called from the main thread of Firefox, negative overrides are called from a thread dedicated to certificate verification. WebExtensions Experiments always run on the main thread. This means that the naive way of accessing an Experiment from the C++ function that would handle negative overrides causes Firefox to crash when it detects that the Experiment is being called from the wrong thread.

Luckily, I had recently gained some experience doing synchronous cross-thread calls (it’s how the Experiment calls the WebExtension), and converting that approach from JavaScript to C++ wasn’t incredibly difficult. (The only irritating part was that the Mozilla wiki’s C++ sample code for this hasn’t been updated for years, and Mozilla’s API has made a change since then that makes the sample code fail to compile. It wasn’t too hard to figure out what was broken, though.)

After doing this, I was able to get my WebExtensions Experiment to trigger negative certificate verification overrides.

Meanwhile, I talked with David Keeler from Mozilla some more about performance, and it became clear that some additional latency optimizations beyond Phase 3 were going to be highly desirable. So, I started optimizing.

The biggest bottleneck in my codebase was that basically everything was synchronous. That means that Firefox verifies the certificate according to its normal rules, and only then passes the certificate to my WebExtensions Experiment and has to wait for the Experiment to reply before Firefox can proceed. Similarly, the WebExtensions Experiment has to wait for the WebExtension to reply before the Experiment can reply to Firefox. This means 2 layers of cross-thread synchronous calls, one of which is entirely in JavaScript (and is therefore less efficient).

The natural solution is to try to make things asynchronous. I decided to start with making the Experiment’s communication with the WebExtension asynchronous. This works by adding a new non-blocking function to the Experiment (called from C++), which simply notifies it that a new certificate has been seen. This is called immediately after Firefox passes the certificate to its internal certificate verifier (and before Firefox’s verification happens), which allows the Experiment and the WebExtension to work in parallel to Firefox’s certificate verifier. When the WebExtension concludes whether an override is warranted, it notifies the Experiment, which stores the result in a cache (right now this cache is a memory leak; periodically clearing old entries in the cache is on the to-do list).

Once Firefox has finished verifying the certificate, it asks the Experiment for the override decision, but now the Experiment is likely to already have the required data (or at least be a lot closer to having it). The C++ to Experiment cross-thread call is still synchronous (for now), but the impact on overall latency is greatly reduced.
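The early-notification flow just described can be sketched like so (Python for illustration, with hypothetical names; the real code is split across C++, the Experiment, and the WebExtension):

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()
pending = {}

def webextension_decide(cert):
    # Stand-in for the WebExtension consulting Namecoin data.
    time.sleep(0.05)
    return cert.endswith(".bit")

def notify_new_certificate(cert):
    # Non-blocking: called as soon as Firefox sees the certificate,
    # before built-in verification starts.
    pending[cert] = pool.submit(webextension_decide, cert)

def builtin_verification(cert):
    # Stand-in for Firefox's own certificate verifier.
    time.sleep(0.05)

def get_override_decision(cert):
    # Called after built-in verification; by now the WebExtension's
    # answer is usually ready, so this rarely blocks for long.
    return pending[cert].result()

notify_new_certificate("example.bit")  # decision computed in parallel...
builtin_verification("example.bit")    # ...with the built-in verifier
print(get_override_decision("example.bit"))  # True
```
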

Unfortunately, at this point Murphy decided he wanted a rematch. My code was consistently crashing Firefox sometime between the C++ code issuing a call to the Experiment and the Experiment receiving the call. I guessed that this was a thread safety issue (Mozilla doesn’t guarantee that the socket info or certificate objects are thread-safe). And indeed, once I modified my C++ code to duplicate the relevant data rather than passing a pointer to a thread, this was fixed. Murphy didn’t go away without a fight though – it looks like Mozilla’s pointer objects also aren’t thread-safe, so I needed to use a regular C++ pointer instead of Mozilla’s smart pointers. (For now, that means that my code has a small memory leak. Obviously that will be fixed later.)

After doing all of the above, I decided to check performance. On both my Qubes installation and my bare-metal Fedora live system, the latency from positive overrides is now greatly reduced. Below are graphs of the latency added by positive overrides on my bare-metal Fedora live system:

The graphs appear to show a noticeable speedup over time. Part of this is likely to be attributable to the JavaScript JIT warming up. Another part of it may be an artifact of the script I used to make Firefox verify the certificates: I first did 3 batches of 5 certs, then a batch of 10 certs, then a batch of 20 certs, for a total of 45 certificate verifications. The graphs also show that certificates that were previously cached tended to verify faster; this is because the cache is located in the Experiment rather than the WebExtension, which eliminates a cross-thread call.

It should be noted that this data is not intended to be scientifically reproducible; there are likely to be differences between setups that could impact the latency significantly, and I made no effort to control for or document such differences. That said, it’s likely to be a useful indicator of how well we’re doing. My opinion is that this is much, much closer to a performance impact that Mozilla would plausibly be willing to merge, compared to the performance before this optimization. However, additional work is still warranted. (And, of course, it’s Mozilla’s opinion, not mine, that matters here!)

There are 2 additional major optimizations that I intend to do (which aren’t yet started):

Make the C++ to Experiment calls asynchronous. This way, the C++ code doesn’t need to issue a synchronous cross-thread call to retrieve the override data from the Experiment.

Add an extra asynchronous call that lets Firefox notify the Experiment and the WebExtension as soon as it knows that a TLS handshake is likely to occur soon for a given domain name. In Namecoin’s case, this gives the WebExtension a chance to ask ncdns for the correct certificate before Firefox even begins the TLS handshake. That way, by the time the observed certificate gets passed to the WebExtension, it will be likely to already know how to verify it.

At this point I’m not 100% certain whether I’ll choose to do more optimization next, or if I’ll focus on hooking the WebExtension into ncdns.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Namecoin TLS for Firefox: Phase 3 (Latency Debugging)

Oct 7, 2017 • Jeremy Rand

I recently mentioned performance issues that I observed with the Firefox TLS WebExtensions Experiment. I’m happy to report that those performance issues appear to have been a false alarm, due to 2 main reasons:

I initially observed the performance issues in a Fedora VM inside Qubes. I decided on a hunch to try the same code on a Fedora live ISO running on bare metal, and the worst-case latency decreased from ~20 ms to ~6 ms. The spread also decreased a lot. I can think of lots of reasons why Qubes might have caused performance issues, e.g. the use of CPU paravirtualization, the I/O overhead associated with Xen, the limit on logical CPU cores inside the VM, the use of memory ballooning, and competition from other VM’s for memory and CPU. In any event, my opinion is that the fault here lies in Qubes, not my code.

The Firefox JavaScript JIT seems to improve performance of the WebExtensions Experiment and of the WebExtension each time it runs. After running the same code (on bare-metal Fedora) against 6 different certificates, the latency decreased from ~6 ms to ~1 ms, and it was still monotonically decreasing at the 6th cert. Testing this code with many repeats is tricky, because it caches validation results per host+certificate pair, and I don’t have a large supply of invalid certificates to test with.

The happy side effect of this debugging is that my current code is now quite a lot closer to Mozilla’s standard usage patterns, since I spent a lot of time trying to figure out whether something that I was deviating from Mozilla on was responsible for the latency. Huge thanks to Andrew Swan from Mozilla for providing lots of useful tips in this area.

I believe that there are still several optimizations that could be made to this code, but for now I’m reasonably satisfied with the performance. (Whether Mozilla will want further optimization is unclear; I’ll ask them later.) My next step will be to set up negative overrides in the WebExtensions Experiment and the WebExtension. After that, I’ll be looking into actually making the WebExtension ask Namecoin for the cert data (instead of returning dummy data). Then comes code cleanup, code optimizations, and a patch submission to Mozilla.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Namecoin TLS for Firefox: Phase 2 (Overrides in a WebExtension)

Sep 30, 2017 • Jeremy Rand

As I mentioned earlier, I’ve been hacking on a fork of Firefox that exposes an API for positive and negative certificate verification overrides. When I last posted, I had gotten this working from the C++ end (assuming that a highly hacky and unclean piece of code counts as “working”). I’ve now created a WebExtensions Experiment that exposes the positive override portion of this API to WebExtensions. (Negative overrides are likely to be basically identical in code structure, I just haven’t gotten to it yet.)

But wait, what’s a WebExtensions Experiment? As you may know, Mozilla deprecated the extension model that’s been used since Firefox was created, in favor of a new model called WebExtensions, which is more cleanly segregated from the internals of Firefox. This has some advantages: it means that WebExtensions can have a permission model rather than being fully trusted with the Firefox internals, and it also means that Firefox internals can change without requiring WebExtensions to adapt. However, it also means that WebExtensions are significantly more limited in what they can do compared to old-style Firefox extensions. WebExtensions Experiments are a bridge between the Firefox internals and WebExtensions. WebExtensions Experiments are old-style Firefox extensions that happen to expose a WebExtensions API. WebExtensions Experiments have all the low-level access that old-style Firefox extensions had; among other things, this means I can access my C++ API from a WebExtensions Experiment written in JavaScript, and expose an API to WebExtensions that allows them to indirectly access my C++ API.

Creating the WebExtensions Experiment was relatively straightforward, given my prior experience with nczilla (remember, a WebExtensions Experiment is mostly just a standard old-style Firefox extension, like nczilla was). I also created a proof-of-concept WebExtension that uses this API to make “Untrusted” errors be ignored. While I was doing this, I also kept track of performance. And therein lies the current problem. When the Experiment simply returns an override without asking the WebExtension, the added overhead is generally 2 ms in the worst case (and often it’s much less than 1 ms). Unfortunately, Experiments and WebExtensions run in separate threads, and the thread synchronization required to get them to communicate increases the overhead by an order of magnitude: the worst-case overhead is around 20 ms (and it’s very rarely less than 6 ms).
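The kind of round-trip being measured here can be sketched generically (Python for illustration; the actual numbers above came from instrumenting Firefox, not from a script like this):

```python
import queue
import threading
import time

def direct_call():
    # Stand-in for the Experiment answering on its own thread.
    return True

def worker(request_q):
    # Simulates the WebExtension thread answering override queries.
    while True:
        reply_q = request_q.get()
        reply_q.put(True)

request_q = queue.Queue()
threading.Thread(target=worker, args=(request_q,), daemon=True).start()

def cross_thread_call():
    # One synchronous round trip to the other thread and back.
    reply_q = queue.Queue()
    request_q.put(reply_q)
    return reply_q.get()

start = time.perf_counter()
direct_call()
direct_us = (time.perf_counter() - start) * 1e6

start = time.perf_counter()
cross_thread_call()
cross_us = (time.perf_counter() - start) * 1e6

print(f"direct: {direct_us:.0f}µs, cross-thread: {cross_us:.0f}µs")
```
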

My assumption was that there was no way Mozilla would be merging this with that kind of performance issue; this assumption was confirmed by David Keeler from Mozilla.

On the bright side, it looks like there are several options for communicating between threads that should have lower latency (something like 1-2 ms overhead, assuming that documentation is accurate). So I’ll be investigating those in the next week or two.

As an interesting side note, Mozilla informs me that the current method I’m using to communicate between threads shouldn’t be working at all. I’m not sure why it’s working when they say it shouldn’t, but in any event it’s probably a Very Bad Thing ™ to be using patterns that Mozilla doesn’t expect to work at all, even without the latency issue (since such patterns might break in the future, which I certainly don’t want).

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Registering Names with the Raw Transaction API

Sep 28, 2017 • Jeremy Rand

The refactorings to the raw transaction API that I mentioned earlier have been merged to Namecoin Core’s master branch. I’ve been doing some experiments with it, and I used it to successfully register a name on a regtest network with only one unlock of my wallet (which covered both the name_new and name_firstupdate operations).
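As a rough sketch of that two-step flow, the sequence of RPC calls looks roughly like this (illustrative only: this is not the actual script, the passphrase and values are placeholders, and a timed walletpassphrase unlock is used here as a simplification of the raw-transaction approach; note that name_firstupdate must wait for the name_new to mature, about 12 blocks):

```python
import json

def rpc_payload(method, *params):
    """Build the JSON-RPC request body for a Namecoin Core call."""
    return json.dumps({"jsonrpc": "1.0", "id": "namecoin",
                       "method": method, "params": list(params)})

def registration_calls(name, value, rand, new_txid):
    """The call sequence for registering one name. name_new returns
    the txid and rand values used later by name_firstupdate."""
    return [
        rpc_payload("walletpassphrase", "hunter2", 600),  # one unlock
        rpc_payload("name_new", name),
        # ... wait ~12 blocks for the name_new to mature ...
        rpc_payload("name_firstupdate", name, rand, new_txid, value),
    ]

calls = registration_calls("d/example", '{"ip":"192.0.2.1"}',
                           "rand-hex", "txid-hex")
print(json.loads(calls[1])["method"])  # name_new
```
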

I also coded support for Coin Control and Fee Control for name registrations, although this code is not yet tested (meaning that if Murphy has anything to say about it, it will need some fixes).

So far the code here is a Python script, so it still needs to be integrated into Namecoin-Qt. Brandon expects this to be relatively straightforward once I hand off my Python code to him. Major thanks to Daniel for getting the API refactorings into Namecoin Core, and thanks to Brandon for useful discussions on how best to structure the code so that it can be integrated into Namecoin-Qt smoothly.

Hopefully I’ll have more news on this subject in the next couple weeks.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

ncdns v0.0.5 Released

Sep 26, 2017 • Jeremy Rand

We’ve released ncdns v0.0.5. List of changes:

Windows installer:

Bundle BitcoinJ/libdohj SPV name lookup client as an alternative to Namecoin Core.

Namecoin TLS for Firefox: Phase 1 (Overrides in C++)

Sep 24, 2017 • Jeremy Rand

Making TLS work for Namecoin websites in Firefox has been an interesting challenge. On the positive side, Mozilla has historically exposed a number of API’s that would be useful for this purpose, and we’ve actually produced a couple of Firefox extensions that used them: the well-known Convergence for Namecoin codebase (based on Convergence by Moxie Marlinspike and customized for Namecoin by me), and the much-lesser-known nczilla (written by Hugo Landau and me, with some code borrowed from Selenium). This was a pretty big advantage over Chromium, whose developers have consistently refused to support our use cases (which forced us to use the dehydrated certificate witchcraft). On the negative side, Mozilla has a habit of removing useful API’s approximately as fast as we notice new ones. (Convergence’s required API’s were removed by Mozilla about a year or two after we started using Convergence, and nczilla’s required API’s were removed before nczilla even had a proper release, which is why nearly no one has heard of nczilla.) On the positive side, Mozilla has expressed an apparent willingness to entertain the idea of merging officially supported API’s for our use case. So, I’ve been hacking around with a fork of Firefox, hoping to come up with something that Mozilla could merge in the future. Phase 1 of that effort is now complete.

In particular, I’ve created a C++ XPCOM component (compiled into my fork of Firefox) that hooks the certificate verification functions, and can produce both positive and negative overrides (meaning, respectively, that it can make invalid certs appear valid, and make valid certs appear invalid). This XPCOM component has access to most of the data that we would want: it has access to the certificate, the hostname, and quite a lot of other data that Firefox stores in objects that the XPCOM component has access to. Unfortunately, it doesn’t yet have access to the full certificate chain (I’m still investigating how to do that), and it also doesn’t yet have access to the proxy settings (I do see how to do that, it’s just not coded yet). The full certificate chain would be useful if you want to run your own CA; the proxy settings would be useful for Tor stream isolation with headers-only Namecoin SPV clients.

Performance is likely to be impacted, since this code is not even close to optimized (nor is performance even measured). I’ll be investigating performance later. Short-term, my next step will be to delegate the override decisions to JavaScript code. (This looks straightforward, but as we all know, our friend Murphy might strike at any time.) After that I’ll be looking at making that JavaScript code delegate decisions to WebExtensions. The intention here is to support use cases besides Namecoin’s. The WebExtensions API that this would expose would presumably be useful for DNSSEC/DANE verification, perspective verification, and maybe other interesting experiments that I haven’t thought of.

Major thanks to David Keeler from Mozilla for answering some questions in IRC that I had about how the Firefox certificate verification code is structured.

Hopefully I’ll have more news on this subject in the next couple weeks.

This work was funded by NLnet Foundation’s Internet Hardening Fund.

Refactoring the Raw Transaction API

Sep 13, 2017 • Jeremy Rand

Several improvements are desirable for how Namecoin Core creates name transactions:

Register names without having to unlock the wallet twice, 12 blocks apart.

Coin Control (a method of improving anonymity).

Specifying a fee based on when you need the transaction to be confirmed.

All of these improvements, though they seem quite different in nature, have one important thing in common: they need to use the raw transaction API in ways that Namecoin Core doesn’t make easy.

What’s the raw transaction API?

Simply put, the raw transaction API is a Bitcoin Core feature that allows the user to create custom transactions at a much lower level than a typical user would want. It gives you a lot of power and flexibility, but it’s also more cumbersome, and if used incorrectly it can cause you to accidentally lose a lot of money.

Why do those features need the raw transaction API?

The raw transaction API is useful for a few reasons. First, it lets you create and sign transactions independently from broadcasting them. That means you can create and sign a name_firstupdate transaction before its preceding name_new transaction has been broadcast. Second, it gives you direct control over the inputs, outputs, and fees for the transaction, whereas the standard name commands instead pick defaults for you that you might not like and can’t change.

What’s wrong with Namecoin Core’s raw transaction API?

It’s not raw enough! First off, the only name transactions it can create are name_update; it can’t create name_new or name_firstupdate transactions. That rules out handling name registrations, and consequently means that anonymously registering names or controlling the fee for name registrations is a no-go. Secondly, it can’t create name_update outputs with a higher monetary value than the default 0.01 NMC. That rules out pure name transactions.

How are we fixing this?

We’re making an API change. Instead of trying to stuff name operation data into createrawtransaction, where it doesn’t belong and where it’s extremely difficult to provide the needed flexibility, we’re removing name support from createrawtransaction and moving it to a new RPC call, namerawtransaction. The new workflow replaces the previous createrawtransaction with two steps:

Use createrawtransaction just like you would with Bitcoin, and specify a currency output of 0.01 NMC for where you want the name output to be.

Use namerawtransaction to prepend a name operation to the scriptPubKey of the aforementioned currency output. This has the effect of converting that currency output into a name output.
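The two-step workflow above can be sketched as the pair of JSON-RPC requests a client would send. This is an illustrative sketch only: the method names come from the post, but the exact argument shapes of namerawtransaction (and the name-operation object) are assumptions, so check the final Namecoin Core API documentation before relying on them.

```python
def build_name_update_workflow(txid, vout, dest_address, name, value,
                               name_amount=0.01):
    """Return the two JSON-RPC requests of the new raw-transaction workflow.

    The namerawtransaction argument layout here is a guess based on the
    description above, not the finalized API.
    """
    # Step 1: an ordinary createrawtransaction, just like Bitcoin, with a
    # 0.01 NMC currency output placed where the name output should go.
    step1 = {
        "method": "createrawtransaction",
        "params": [
            [{"txid": txid, "vout": vout}],   # inputs
            {dest_address: name_amount},      # placeholder currency output
        ],
    }
    # Step 2: namerawtransaction prepends a name operation to the
    # scriptPubKey of that output, converting it into a name output.
    step2 = {
        "method": "namerawtransaction",
        "params": [
            "<hex of raw tx from step 1>",    # stays elided; comes from step 1
            0,                                # index of the output to convert
            {"op": "name_update", "name": name, "value": value},
        ],
    }
    return [step1, step2]
```

The point of the split is visible in the sketch: step 1 is byte-for-byte Bitcoin-compatible, and all Namecoin-specific logic lives in step 2.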

What will be the impact?

Users of the raw transaction API will need to update their code. If you’re using the raw transaction API to create currency transactions, then this will actually allow you to delete your Namecoin-specific customizations, since it will be just like Bitcoin again. If you’re using the raw transaction API to create name transactions (this includes people who are doing atomic name trading), you’ll need to refactor your createrawtransaction-using code so that it also calls namerawtransaction. If you’re a hacker who enjoys experimenting, this new workflow will probably be much more to your liking, as it will allow you to do stuff that you couldn’t easily do before. And if you’re just an average user of Namecoin-Qt, you’ll probably like the new features that this enables, such as easier registration of names, better privacy, and lower fees.

Who’s involved in this work?

Jeremy Rand wrote a rough spec for the new API.

Daniel Kraft is implementing the changes to the API.

Jeremy Rand plans to utilize the new API in proof-of-concept scripts for several use cases (including the use cases listed above).

Brandon Roberts plans to convert Jeremy’s proof-of-concept scripts into GUI features in Namecoin-Qt.

This work was funded in part by NLnet Foundation’s Internet Hardening Fund.

What might come later?

Maybe Namecoin-Qt support for atomic name trading? :)

BitcoinJ support merged into ncdns-nsis

Sep 3, 2017 • Hugo Landau

The ncdns-nsis project, which provides a zero-configuration Windows installer
for Namecoin domain name resolution functionality, has merged SPV support,
implemented via BitcoinJ. This enables Windows machines to resolve .bit
domain names without having to download the full Namecoin chain.

Merging this functionality means that Namecoin domain names can be resolved on
Windows client machines with only a minimal chain synchronization and storage
burden, in exchange for a limited reduction in the security level provided.

Installer binaries will be published in due course as remaining ncdns-nsis
issues are concluded.

To use the BitcoinJ client, run the ncdns-nsis installer and select the SPV
option when asked whether to install a Namecoin node. Run BitcoinJ after
installation and wait for synchronization to complete; .bit name resolution
is then available.

Video, Slides, and Paper from Namecoin at GCER 2017

Aug 31, 2017 • Jeremy Rand

As was announced, I represented Namecoin at the 2017 Global Conference on Educational Robotics. Although GCER doesn’t produce official recordings of their talks, I was able to obtain an amateur recording of my talk from an audience member. I’ve also posted my slides, as well as my paper from the conference proceedings.

It should be noted that, as this is an amateur recording, the audio quality is not spectacular. It should also be noted that the audience is rather different from the audiences for whom I usually give Namecoin talks; as a result, the focus of the content is also rather different from my usual Namecoin talks.

Huge thanks to the staff and volunteers from KISS Institute for Practical Robotics, who organized GCER. Especially big thanks to Roger Clement and Steve Goodgame for giving me excellent feedback on the general outline of my paper and talk. This was an excellent event, and I look forward to the next time I’m able to attend.

ncdns v0.0.4 Released

Aug 24, 2017 • Jeremy Rand

We’ve released ncdns v0.0.4. This release incorporates a bugfix in the madns dependency, which fixes a DNSSEC signing bug. The ncdns Windows installer in this release also updates the Dnssec-Trigger dependency.

TLS Support for Chromium on Windows Released

Aug 3, 2017 • Jeremy Rand

After quite a bit of code cleanup, we’ve released a new beta of ncdns for Windows, which includes Namecoin-based TLS certificate validation for Chromium, Google Chrome, and Opera on Windows. This includes both the CryptoAPI certificate injection and NUMS HPKP injection that were discussed previously. You can download it on the Beta Downloads page. We’ve also posted binaries of ncdns (without install scripts or TLS support) for a number of other operating systems; they’re also linked on the Beta Downloads page.

LibreTorNS Merged to Upstream TorNS

Jul 7, 2017 • Jeremy Rand

Generally I’m of the opinion that it’s better to get patches to other people’s projects merged upstream whenever feasible; accordingly I submitted the LibreTorNS patches upstream. Happily, meejah has merged the LibreTorNS patches to upstream TorNS. That means LibreTorNS is now obsolete, and you should use meejah’s upstream TorNS for testing dns-prop279 going forward.

Huge thanks to meejah for accepting the patch! Also thanks to meejah for the code review – there was a bug hiding in my initial submitted patch, which is one of the reasons I always prefer getting things reviewed by upstream.

Namecoin’s Jeremy Rand will be a speaker at GCER 2017

Jul 6, 2017 • Jeremy Rand

I will be speaking at the 2017 Global Conference on Educational Robotics (July 8-12, 2017). For those of you unfamiliar with GCER, the audience is mostly a mix of middle school students, high school students, college students, and educators, most of whom are participating in the robotics competitions hosted at GCER (primarily Botball). I competed in Botball (and the other 2 GCER competitions: KIPR Open and KIPR Aerial) between 2003 and 2015, and that experience (particularly the hacking aspect, which is actively encouraged in all three competitions) was a major factor in why I’m a Namecoin developer. My talk is an outreach effort, which I hope will result in increased interest in the Botball scene in the areas of privacy, security, and human rights. My talk (scheduled for July 10) is entitled “Making HTTPS and Anonymity Networks Slightly More Secure (Or: How I’m Using My Botball Skill Set in the Privacy/Security Field)”.

Huge thanks to KISS Institute for Practical Robotics (which organizes GCER) for suggesting that I give this talk. Looking forward to it!

One issue I encountered is that TorNS currently implements a rather unfortunate restriction that input and output domain names must be in the .onion TLD. I’ve created a fork of TorNS (which I call LibreTorNS) which removes this restriction. LibreTorNS is currently required for Namecoin usage.

The code is available from the Beta Downloads page. Please let me know what works and what doesn’t. And remember, using Namecoin naming with Tor will make you heavily stand out, so don’t use this in any scenario where you need anonymity. (Also please refer to the extensive additional scary warnings in the readme if you’re under the mistaken impression that using this in production is a good idea.)

A while back, I released on the tor-dev mailing list a tool for using Namecoin naming with Tor. It worked, but it was clearly a proof of concept. For example, it didn’t implement most of the Namecoin domain names spec, it didn’t work with the libdohj client, and it used the Tor controller API. I’ve now coded a new tool that fixes these issues.

Fixing the spec compliance and libdohj compatibility was implemented by using ncdns as a backend instead of directly talking to Namecoin Core. Interestingly, I decided not to do anything Namecoin-specific with this tool. It simply uses the DNS protocol to talk to ncdns, or any other DNS server you provide. (So yes, that means that you could, in theory, use the DNS instead of Namecoin, without modifying this tool at all.)

In order to work with the DNS protocol, some changes were made to how .onion domains are stored in Namecoin. The existing convention is to use the tor field of a domain, which has no DNS mapping. Instead, I’m using the txt field of the _tor subdomain of a domain. This is consistent with existing practice in the DNS world.
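The old and new storage conventions can be contrasted with a short sketch. The JSON shapes below follow my reading of the Namecoin domain names spec (the "map"/"txt" fields), but treat the exact field layout as an assumption and defer to the spec itself.

```python
# Old convention: a Namecoin-specific "tor" field with no DNS mapping.
old_value = {"tor": "exampleonionaddrexample.onion"}

# New convention: a TXT record on the _tor subdomain, which maps cleanly
# onto an ordinary DNS query such as "_tor.example.bit. IN TXT".
new_value = {"map": {"_tor": {"txt": "exampleonionaddrexample.onion"}}}

def onion_from_value(value):
    """Extract the .onion target from a domain value (new convention)."""
    return value["map"]["_tor"]["txt"]
```

Because the new convention has a DNS mapping, the Prop279 tool never needs to understand Namecoin values at all; it just issues a TXT query.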

This tool is using Tor’s Prop279 Pluggable Naming protocol rather than the Tor controller API. Right now tor (little-t) doesn’t implement Prop279. To experiment with Prop279, TorNS is a useful way of simulating Prop279 support using the control port as a backend. Unfortunately, TorNS’s license is unclear at the moment. I’m in the process of checking with meejah (author of TorNS) to see if TorNS can be released under a free software license. Until that minor bit of legalese is taken care of, the release of Namecoin resolution for Tor is on hold.

This development was funded by NLnet.

Automated Nothing-Up-My-Sleeve HPKP Insertion for Chromium

Jun 20, 2017 • Jeremy Rand

As I mentioned in my previous post, we protect from compromised CA’s by using a nothing-up-my-sleeve (NUMS) HPKP pin in Chromium. Previously, it was necessary for the user to add this pin themselves in the Chromium UI. This was really sketchy from both a security and UX point of view. I have now submitted a PR to ncdns that will automatically add the necessary pin.

This is implemented as a standalone program that is intended to be run at install. It works by parsing Chromium’s TransportSecurity storage file (which is just JSON), adding an entry for bit that contains the NUMS key pin, and then saving the result as JSON back to TransportSecurity.
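The parse/insert/serialize shape of that program can be sketched as follows. This is only an illustration: in the real TransportSecurity file, entries are keyed by a hashed form of the domain name and carry more fields than shown here, so the key and entry below are placeholders, not Chromium's actual schema.

```python
import json

def add_nums_pin(transport_security_json, domain_key, pin_entry):
    """Parse Chromium's TransportSecurity file (plain JSON), add an HPKP
    entry, and return the serialized result.

    domain_key and pin_entry are simplified placeholders; the real file
    keys entries by a hash of the canonicalized domain name.
    """
    state = json.loads(transport_security_json)
    state[domain_key] = pin_entry
    return json.dumps(state, indent=3)

# Example: add a pin for the bit TLD (placeholder key and field names).
before = json.dumps({})
after = add_nums_pin(before, "<hashed form of 'bit'>", {
    "pkp_include_subdomains": True,
    "spki_hashes": ["sha256/<NUMS public key hash>"],
})
```

The important property is that the whole operation is an offline JSON rewrite; Chromium simply picks up the modified file on next launch.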

I expect this to work for most browsers based on Chromium (e.g. Chrome and Opera), on most OS’s (e.g. GNU/Linux, Windows, and macOS), but so far I’ve only tested with Chrome on Windows. I don’t expect it to work with Electron-based applications such as Riot and Brave; Electron doesn’t seem to follow the standard Chromium conventions on HPKP. I haven’t yet examined Electron to see if there’s a way we can get it to work.

This isn’t yet integrated with the NSIS installer; I’ll be asking Hugo to take a look at doing the integration there.

Progress on Electrum-NMC

Jun 17, 2017 • ahmedbodi

Work on the Electrum port for Namecoin has been moving along nicely. It was decided that we will use the electrum-client from spesmilo, along with the ElectrumX server. ElectrumX was chosen because the original electrum-server was discontinued a few months ago.

So far, the Electrum client has been ported over for compatibility with ElectrumX. This includes the re-branding, blockchain parameters, and other Electrum-related settings for blockchain verification.

On the roadmap now are:

Extend ElectrumX NMC support to allow for full verification of AuxPoW

Modify the new Electrum client to verify the AuxPoW

Add Name handling support to electrum

These repos are for testing purposes only. Do not use them unless you’re willing to risk losing funds.

How we’re doing TLS for Chromium

Jun 15, 2017 • Jeremy Rand

Back in the “bad old days” of Namecoin TLS (circa 2013), we used the Convergence codebase by Moxie Marlinspike to integrate with TLS implementations. However, we weren’t happy with that approach, and have abandoned it in favor of a new approach: dehydrated certificates.

What’s wrong with Convergence? Convergence uses a TLS intercepting proxy to replace the TLS implementation used by web browsers. Unfortunately, TLS is a really difficult and complex protocol to implement, and the nature of intercepting proxies means that if the proxy makes a mistake, the web browser won’t protect you. It’s fairly commonplace these days to read news about a widely deployed TLS intercepting proxy that broke things horribly. Lenovo’s SuperFish is a well-known example of a TLS intercepting proxy that made its users less safe.

Convergence was in a somewhat special situation, though: it reused a lot of code that was distributed with Firefox (via the js-ctypes API), which reduced the risk that it would do something horribly dangerous with TLS. It was also originally written by Moxie Marlinspike (well-known for Signal), which additionally reduced the risk of problems. Unfortunately, Mozilla stopped shipping the relevant code with Firefox, Moxie stopped maintaining Convergence, and Mozilla decided to deprecate additional functionality that was used by Convergence. This made it clear that the Convergence codebase wasn’t going to be viable, and that if we wanted to use an intercepting proxy, we’d be using a codebase that was substantially less reliable than Convergence.

So, we went back to the drawing board, and came up with a new solution.

As a first iteration, TLS implementations have a root CA trust store, and injecting a self-signed end-entity x509 certificate into the root CA trust store will allow that certificate to be trusted for use by a website. (For an explanation of how this can be done with Windows CryptoAPI, see my previous post, Reverse-Engineering CryptoAPI’s Certificate Registry Blobs). We can do this right before the TLS connection is opened by hooking the DNS request for the .bit domain (this is easy since we control the ncdns codebase that processes the DNS request).

However, there are three major issues with this approach:

An x509 certificate is very large, and storing this in the blockchain would be too expensive. Using a fingerprint would be much smaller, but we can’t recover the full certificate from just the fingerprint.

x509 certificates might be valid for a different purpose than advertised. For example, if we injected a certificate that has the CA bit enabled, the owner of that certificate would be able to impersonate arbitrary websites if they can get you to first visit their malicious .bit domain. x509 is a complex specification, and we don’t want to try to detect every possible type of mischief that can be done with them.

Injecting a certificate doesn’t prevent a malicious CA that is trusted for DNS names from issuing fraudulent certificates for .bit domain names.

Problems 1 and 2 can be solved at the same time. First, we use ECDSA certificates instead of the typical RSA. ECDSA is supported by all recent TLS implementations, but RSA is largely dominant because of inertia and the prevalence of outdated software. By excluding the old software that relies on RSA, we get much smaller keys and signatures. (Old software isn’t usable with this scheme anyway, because of our solution to Problem 3.)

Next, instead of having domain owners put the entire ECDSA x509 certificate in the blockchain, the domain owner extracts only 4 components of the certificate: the public key, the start and end timestamps for the validity period, and the signature. As long as the rest of the certificate conforms to a fixed template, those 4 components (which we call a dehydrated certificate) can be combined with a domain name and the template, and rehydrated into a fully valid x509 certificate that can be injected into the trust store. This technique was invented by Namecoin’s Lead Security Engineer, Ryan Castellucci.
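The dehydrate/rehydrate round trip can be sketched as follows. The field handling is heavily simplified (the real ncdns implementation splices these values into a DER-encoded x509 template via a forked Go x509 package); the template fields shown are illustrative, not the actual template.

```python
from collections import namedtuple

# The 4 owner-specific components that go in the blockchain.
Dehydrated = namedtuple("Dehydrated",
                        ["pubkey", "not_before", "not_after", "signature"])

# Everything else comes from a fixed template shipped with ncdns.
# All potentially dangerous fields (e.g. the CA bit) are pinned here,
# so a dehydrated cert can't smuggle in unexpected capabilities.
TEMPLATE = {
    "version": 3,
    "is_ca": False,
    "sig_alg": "ecdsa-with-SHA256",
}

def dehydrate(cert):
    """Extract only the owner-specific components of a certificate."""
    return Dehydrated(cert["pubkey"], cert["not_before"],
                      cert["not_after"], cert["signature"])

def rehydrate(dehydrated, domain):
    """Recombine the 4 components with a domain name and the template,
    yielding a certificate equivalent to the original (assuming the
    original conformed to the template)."""
    cert = dict(TEMPLATE)
    cert.update({
        "subject_cn": domain,
        "pubkey": dehydrated.pubkey,
        "not_before": dehydrated.not_before,
        "not_after": dehydrated.not_after,
        "signature": dehydrated.signature,
    })
    return cert
```

The signature still verifies after rehydration precisely because the template is fixed: signing the original cert and signing the rehydrated one cover identical bytes.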

It should be noted that a dehydrated certificate can’t do any mischief such as being valid for unexpected uses; all of the potentially dangerous fields are provided by the template, which is part of ncdns. The dehydrated data is also quite small: circa 252 bytes (we can probably shrink it further in the future). Implementing this in ncdns was a little bit tricky, because the Go standard library’s x509 functions that are needed to perform the signature splicing are private. I ended up forking the Go x509 package, and adding a public function that exposes the needed functionality. (Before you freak out about me forking the x509 package, my package consists of a single file that contains the additional function, and a Bash script that copies the official x509 library into my fork. It’s reasonably self-contained and shouldn’t run into the issues that more invasive forks encounter regularly.)

So what about Problem 3? Well, for this, I abuse (ahem, take advantage of) an interesting quirk in browser implementations of HPKP (HTTPS Public Key Pinning). Browsers only enforce key pins against certificates for built-in certificate authorities; user-specified certificate authorities are exempt from HPKP. This behavior is presumably to make it easier for users to intentionally intercept their own traffic (or for corporations to intercept traffic in their network, which is a less ethical version of a technologically identical concept). As such, I believe that this behavior will not go away anytime soon, and is safe to rely on. The user places a key pin at the bit domain, with subdomains enabled, for a “nothing up my sleeve” public key hash. As a result, no public CA can sign certificates for any domain ending in .bit, but user-specified CA’s can. Windows CryptoAPI treats user-specified end-entity certificates as user-specified CA’s for this purpose. As such, rehydrated certificates that ncdns generates will be considered valid, but nothing else will. (Unless you installed another user-specified CA on your machine that is valid for .bit. But if you did that, then either you want to intercept .bit, in which case it’s fine, or you did it against your will, in which case you are already screwed.)

I’ve implemented dehydrated certificate injection as part of ncdns for 2 major TLS implementations: CryptoAPI (used by Chromium on Windows) and NSS (used by Chromium on GNU/Linux). These are currently undergoing review as pull requests for ncdns. (The macOS trust store should also be feasible, but I haven’t done anything with it yet.) Right now, those pull requests prompt the user during ncdns installation with instructions for adding an HPKP pin to Chromium. (If you’ve tried out the ncdns for Windows installer on a machine that has Chromium, you might have noticed this dialog.) This isn’t great UX, and I’ve found a way to do this automatically without user involvement, which I will be implementing into the ncdns installer soon.

Unfortunately, abusing HPKP in this way isn’t an option in Firefox, because Mozilla’s implementation of HPKP doesn’t permit key pins to be placed on TLD’s. (As far as I can tell, the specifications seem to be ambiguous on this point.) Mozilla does offer an XPCOM API for HPKP (specifically, nsISiteSecurityService) that can inject key pins for individual .bit domains on the fly, but since XPCOM is deprecated by Mozilla, this is not a viable option as-is. On the bright side, Mozilla seems interested in implementing the API’s we need to do this in a less hacky way, so I’ll be engaging with Mozilla on this.

As another note: NSS trust store injection is rather slow right now, because I’m currently using NSS’s certutil to do the injection, and certutil isn’t properly optimized for speed. Sadly, there doesn’t seem to be an easy way of bypassing certutil for NSS like there is for CryptoAPI (NSS’s trust store is a sqlite database with a significantly more complex structure than CryptoAPI, and the CryptoAPI trick of leaving out all the data besides the certificate doesn’t work for NSS). I will probably be filing at least one bug report with NSS about certutil’s performance issues. If progress on fixing those issues appears to be unlikely, I think it may be feasible to do some witchcraft to speed it up a lot, but I’m hoping that things won’t come to that.

I’m hoping to get at least the CryptoAPI PR merged to official ncdns very soon, at which point I’ll ask Hugo to release a new ncdns for Windows installer so everyone can have fun testing.

ncdns for Windows Beta Installer Now Available

May 30, 2017 • Hugo Landau

In order to facilitate the easy resolution of .bit domains, an installer for
ncdns for Windows has been under development. This installer automatically
installs and configures Namecoin Core, Unbound and ncdns.

An experimental binary release for this installer is now available. Interested
Namecoin users are encouraged to test the installer and report any issues they
encounter.

This release does not yet integrate the TLS integration support for ncdns under
development by Jeremy Rand. This will be incorporated in a future release.

The development of this installer was funded by NLnet.

Reverse-Engineering CryptoAPI’s Certificate Registry Blobs

May 27, 2017 • Jeremy Rand

Every so often, I’m doing Namecoin-related development research (in this case, making TLS work properly) and I run across some really interesting information that no one else seems to have documented. While this post isn’t solely Namecoin-related (it’s probably useful to anyone curious about tinkering with TLS), I hope you find it interesting regardless.

A note on the focus here: while this research was done for the purpose of engineering specific things, I’m writing it from more of a “basic research” point of view. My dad’s career was in basic research, and I firmly believe that learning cool stuff for the sake of learning it is a worthwhile endeavor, regardless of what the practical applications are (and indeed, usually when basic research turns out to have applications, which is commonplace, the initial researchers didn’t know what those applications would be). Since I’m an engineer, there will be a bit of application-related commentary here, but don’t read this expecting it to be a summary of the next Namecoin software release’s feature set or use cases.

In Windows-based OS’s, most applications handle certificates via the CryptoAPI. CryptoAPI serves a somewhat similar role in Windows certificate verification as OpenSSL does on GNU/Linux-based systems. Notably, Mozilla-based applications like Firefox and Thunderbird don’t use CryptoAPI (nor OpenSSL); they use the Mozilla library NSS (on both Windows and GNU/Linux). However, except for Mozilla applications, and a few applications ported from GNU/Linux (e.g. Python) which use OpenSSL, just about everything on Windows uses CryptoAPI for its certificate needs. CryptoAPI is a quite old Microsoft technology; it dates back at least to Windows NT 4. (It might be even older, but I’ve never touched nor read about any of the earlier incarnations of Windows NT, so I wouldn’t know.) Like any other codebase that’s been around for over 2 decades, its design is somewhat convoluted, and my guess is that if it were being designed from scratch today, it would look very different.

CryptoAPI maintains a bunch of different stores for certificates. These stores are designated according to the certificate’s intended usage (e.g. a website cert, an intermediate CA, a root CA, a personal cert, and a bunch of other use cases that I don’t understand because I’ve never managed any kind of certificate infrastructure for an enterprise), the method by which the certificate was loaded (e.g. by a web browser cache, by group policy, and a bunch of other methods that, again, I don’t understand because I don’t do enterprise infrastructure), which users have permission to use the certs (roaming profiles have special handling), and even which applications are expected to consider them valid (Cortana, Edge, and Windows Store all have their own certificate stores, for reasons that I don’t understand in the slightest, although I do wonder whether adding an intercepting proxy to Cortana’s cert store would be useful in an attempt to wiretap Cortana and see what data Microsoft collects on its users). You can see a subset of the certificate stores’ contents via certmgr.msc, and there’s a command-line tool included with Windows called certutil which can edit or dump this data as well. Neither of these tools actually shows all of the stores, e.g. Cortana, Edge, and Windows Store are secret and invisible. Also, don’t confuse the CryptoAPI certutil with the Mozilla command-line tool also called certutil, which is similar but is for NSS stores and has an entirely different syntax.

Incidentally, CryptoAPI has some interesting behavior when it comes to root CA’s. If you add a self-signed website cert to a root CA store, that self-signed website cert becomes fully trusted (HSTS and HPKP even work, which implies that it doesn’t get reported as an override). Of course, this is usually a dangerous idea, since that self-signed website could then sign other websites’ certs – you did tell Windows to treat it as a root CA, after all. But Windows actually does respect the CA and CertUsage flags in this case: if you construct a cert that is not valid as a CA, Windows will happily let you add it to a root CA store, will accept it as a website cert, but will refuse to trust any other cert signed by that cert. Namecoin lead security engineer Ryan Castellucci told me on IRC that he’s not sure if this behavior is even defined in a spec, but in my testing, NSS seems to exhibit identical behavior (no idea about OpenSSL). Regardless of specs, Microsoft has a fanatical obsession with not changing behavior of any public-facing API that might impact backwards compatibility (to Microsoft, the original implementation is the spec), so I think it’s probably pretty safe to rely on this behavior, even when someone as thoroughly knowledgeable as Ryan has never encountered anything in the wild that does this. Of course, that’s just my assessment – I take no responsibility if this burns you. As they say on Brainiac: Science Abuse, “we do these experiments so you don’t have to – do not try this at home – no really, don’t.”

Now, unfortunately, CryptoAPI has a problem. It expects a user to have administrator privileges in order to add a cert to most of the stores. This is probably well-meaning, because you definitely don’t want some random piece of malware that abused a JavaScript zero-day to be able to add a root CA, or anything like that. (Fun fact: any such malware can, however, add a root CA to Firefox, because the NSS cert stores are simply a file in your profile directory, and are therefore user-writeable. That’s even true for Firefox on many GNU/Linux systems, even though the OpenSSL store is protected.) Of course, the security benefits of requiring privileged access for this are dubious, given that malware running as the primary user can do all sorts of other mischief, such as replacing the shortcut to your browser with a patched version that MITM’s you. However, regardless of the alleged security benefit of this policy, there’s a fairly obvious problem here: this implies that if you want to run software that programmatically adds root CA’s, perhaps for the use case in the previous paragraph, you need to give that software Administrator privileges. As a (minimally) sane person, I consider running anything, much less a daemon that interacts with a bunch of untrusted network hosts (e.g. Namecoin peers), as an administrator to be an absolute dealbreaker. Yes, I did code it that way as a proof of concept for the hackathon by the College Cryptocurrency Network that I got 3rd place in, but no way in hell am I going to ship software to end users that does such irresponsible things. And if you’re the kind of person who would be tempted to do that, please, for the sake of your users, exit the software development field before you get some dissident or whistleblower murdered. This stuff actually is important to those of us with ethics.

You might wonder: why the heck isn’t there a permission system for this? Coming from a culture that loves the concept (if not the implementation) of things like AppArmor and SELinux, that was certainly my thought. But alas, I was unable to find any Microsoft documentation that suggested a way to delegate access to a specific cert store to some other user. (Of course, Microsoft’s documentation is a train wreck, so maybe they did address this use case and I just couldn’t find any mention of it.) However, I did learn something interesting by Googling. While OpenSSL cert stores are just a filesystem folder, and NSS cert stores are a database file (whose database backend is either BerkeleyDB or SQLite), CryptoAPI mostly uses… the Windows Registry. Remember, this is Microsoft; they dump their garbage in the registry with as little hesitation as petroleum companies dump their garbage in Latin American rainforests. (Personal certificates that are part of roaming profiles are instead placed in a user’s profile folder, apparently ever since Windows 2000 came out. But almost everything else is in the registry.) Since the registry does have a permission system, this looks like the perfect solution.

It was relatively easy to figure out where these certificates are located in the dense, uncharted jungle that is the registry. Indeed, you can search your registry for keys titled Root and you’ll find all the root CA stores (the other types of stores are in sibling keys). Each certificate is located in its own subkey (the subkey is named based on the certificate’s SHA-1 fingerprint). Actually, let me digress for a moment. Why the hell is Microsoft using SHA-1 hashes as the names of registry keys, even in Windows 10? Yes, I know SHA-1 was not known to be weak when Microsoft designed CryptoAPI, but tying the name of something to a specific hash algorithm seems like a massively stupid idea in terms of design and safety. (And no, it’s not a good idea to drive drunk just because your crazy git uncle Linus does it every New Year’s Eve and hasn’t died yet.) Anyway, inside that subkey is a single value, called Blob, which contains binary data encoding the certificate. Not too complicated, right?
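Deriving that subkey name is straightforward; here’s a hypothetical Go helper, assuming the fingerprint is the SHA-1 of the cert’s DER encoding rendered as uppercase hex without separators (the exact formatting is an assumption based on what regedit showed me, so verify against your own registry):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"strings"
)

// registryKeyName derives the name of a certificate's registry subkey:
// the SHA-1 fingerprint of the cert's DER encoding, rendered as hex.
// (Uppercase without separators is an assumption, not documented behavior.)
func registryKeyName(der []byte) string {
	sum := sha1.Sum(der)
	return strings.ToUpper(hex.EncodeToString(sum[:]))
}

func main() {
	der := []byte{0x30, 0x82, 0x01, 0x0A} // placeholder bytes, not a real cert
	fmt.Println(registryKeyName(der))
}
```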

Oh, wait. We’re talking about Microsoft. Everything is complicated, usually for no discernible reason whatsoever. Also, the most complicated things usually have the least documentation. I know people who long ago adopted a policy of getting their Windows documentation from the ReactOS source code instead of the Microsoft website, because a small, minimally funded project that’s reverse-engineering everything writes more accurate documentation than the wildly successful company that actually engineered the system and wrote the original source code. Anyway, I looked at the contents of the binary blob in the registry, and noticed that it didn’t look right. More specifically, it wasn’t a DER-encoded X.509 structure, nor was it even PEM-encoded. Actually, there was a substring that did correspond to the DER-encoded X.509 structure, but there was a crapload of extra data too. For reference, it looked like this (in .reg format):

Just to see what would happen, I shoved a raw DER-encoded X.509 certificate into the registry, but of course CryptoAPI didn’t accept it, so I needed to figure out what that extra data was. Now, as a middle school student, as a high school student, and as an undergraduate student (though sadly not as a graduate student), I did a lot of reverse-engineering of various binary formats (making a Mega Drive Zero Wing ROM include custom dialogue between “Cracker” and “Admin” about how “All your 802.11b are belong to us” and that “You are on the way to DDoS microsoft.com” was a few days of work in 7th grade). So I was quite ready to go that route. However, I learned long ago that it’s always better to spend a few hours on Google to see if someone else has already done your dirty work for you, because usually someone has. So I did that.

The first result I found was a Microsoft mailing list thread from 2002 where Mitch Gallant and Rebecca Bartlett both inquired about this format. Microsoft’s response was to refuse to provide any documentation, and generally be rude and unhelpful, on the grounds that this use case was unsupported. Now, hang on, because I need to point something out. I’ve asked a lot of software vendors for obscure technical information about their products, nearly all of which were for unsupported use cases. By far my favorite vendor to work with in this area was the robotics hardware company Charmed Labs (they are incredibly nice and helpful, even happily giving me proprietary source code that was protected by patents, for me to do whatever I wanted with, as long as I didn’t distribute or use commercially). But generally speaking, the “good” companies’ responses to such requests follow this kind of formulation:

1. What you’re trying to do is not something that we officially support.

2. We think what you’re doing is a bad idea for these reasons.

3. There may be other reasons that it’s a bad idea, which we haven’t thought of.

4. We think the “right” way is something else, and we’re happy to help you with that method if you like.

5. Regardless, here are all the answers to your questions.

6. If you have any other questions about this unsupported use case, we’ll try to answer, as long as it doesn’t become a time sink for us. Don’t expect super-quick replies, because we’re looking up answers in our spare time.

7. If any of the info we’re providing turns out to be incomplete or wrong, or you end up getting burned in any other way for doing this, we’re not responsible.

Whenever I’ve gotten a reply like this (and it’s happened reasonably often), I was a happy camper, because I was able to make an informed decision. Sometimes I decided to abandon my quest; other times I disregarded the warning and pressed on anyway. In the latter case, I knew full well what I was getting into, because the vendor had given me sufficient information and context for me to make up my mind. Usually, I was satisfied with my decision at the end of the day. In fact, I cannot remember a case where I did something I seriously regretted after being warned against it like that. I’m sure it could have happened, had the quantum noise been different, but if that had come to pass, I’m confident that I wouldn’t have blamed the vendor for giving me information coupled with advice that I ignored. There were several times where my disregard for the warning resulted in some lost development time or temporary confusion, but seriously, who could possibly be angry about the chance to gain practical experience in an unfamiliar area, particularly given that when I did decide to change course, I had both the vendor’s expert recommendations and my new practical experience to inform my decision? What more could anyone want? My point is, the good software vendors treat their users like real, sentient people when they ask for information, while the bad software vendors (Microsoft included) treat their users the way that the owners of Number 4 Privet Drive treated their nephew up until mid-1991: Don’t ask questions!

Technically, Microsoft did provide parts (1) through (4) of the above form, but they don’t even qualify for partial credit here, because the reason they gave for (2) is atrocious on its face: once in 2 decades, they moved a store from the registry to the filesystem to handle roaming profiles, and anyone using the registry directly would have had to update their software (with a really minor change) once Windows 2000 came out. Frankly, if you’re unwilling to update your software once in 2 decades, I don’t know why you’re in this field, and you should go get a Ph.D. in Latin Literature so that you never need to learn anything new in order to stay at the top of your field for your whole life. Anyway, reading that thread was entirely unhelpful, except for the fact that it told me I had made a great choice never interviewing at Microsoft; if I ever become the kind of tech support robot who shows up in that thread, someone please kill me.

So, I continued to look through Google for a while. And I did find two people who had actually tried to reverse-engineer the blob format. Tim Jacobs had posted some information in 2008, and Willem Jan Hengeveld had posted some other info in 2003. Interestingly, both Tim and Willem had been looking into dealing with bugs in mobile versions of Windows that made it hard to import a certificate any other way. (See, basic research has diverse, and often non-obvious, use cases.) Tim’s documentation wasn’t particularly helpful for me, because while it explained how to solve a specific problem (which wasn’t the problem I had), it didn’t really explain why that solution worked, nor how he figured it out (actually, I’m very skeptical of why his solution even works, and based on my research below, I suspect that he simply got lucky and that his method will spectacularly break in many real-world scenarios). However, Willem’s documentation was very helpful. According to Willem, a cert blob is a sequence of records, each of which consists of a 4-byte propid (which I gather means “property ID”), a 4-byte unknown value (which I assume is reserved by Microsoft for future expansion, since everything I encountered used exactly the same value), a 4-byte size, and then the raw data for that property (whose size in bytes is specified by the size field). Willem also listed the common property IDs that show up.
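Willem’s record layout is simple enough to sketch in Go. This is a hypothetical parser based solely on that reverse-engineered description, not on any Microsoft documentation; the little-endian byte order is an assumption from Windows convention, and the property IDs in the example are made up:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Property is one record from a CryptoAPI cert "Blob" value, per Willem's
// layout: 4-byte property ID, 4-byte reserved field, 4-byte size, raw data.
type Property struct {
	PropID   uint32
	Reserved uint32
	Data     []byte
}

// parseBlob splits a cert blob into its property records.
func parseBlob(blob []byte) ([]Property, error) {
	var props []Property
	for len(blob) > 0 {
		if len(blob) < 12 {
			return nil, fmt.Errorf("truncated record header: %d bytes left", len(blob))
		}
		size := binary.LittleEndian.Uint32(blob[8:12])
		if uint32(len(blob)-12) < size {
			return nil, fmt.Errorf("record size %d exceeds remaining %d bytes", size, len(blob)-12)
		}
		props = append(props, Property{
			PropID:   binary.LittleEndian.Uint32(blob[0:4]),
			Reserved: binary.LittleEndian.Uint32(blob[4:8]),
			Data:     blob[12 : 12+size],
		})
		blob = blob[12+size:]
	}
	return props, nil
}

func main() {
	// Synthetic two-record blob for illustration only.
	blob := []byte{
		3, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0xAA, 0xBB,
		32, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0x30,
	}
	props, err := parseBlob(blob)
	if err != nil {
		panic(err)
	}
	for _, p := range props {
		fmt.Printf("propid=%d size=%d\n", p.PropID, len(p.Data))
	}
}
```

Running something like this against a real Blob value exported from the registry should yield one record per property, with the DER certificate as the data of one of them.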

There was just one problem: the blob I was looking at had a bunch of property IDs that weren’t in Willem’s list. So, of course, I Googled for one of the property IDs that was in Willem’s list, and I ended up finding this page and this page on the Microsoft website, which had a (mostly) complete list. Except… those references had descriptions (which, admittedly, is nice) but no info on the property IDs’ numerical values. However, they did include a reference to Wincrypt.h. Aha, I thought, I’ve done this before! So I went and looked up that header file in the MinGW source code, and was treated to a complete list of the numerical values of all the property IDs.

From there, I started gathering a list of which property IDs Windows seemed to be using, so that I could generate the appropriate information while inserting a cert, given only its DER X.509 encoding. Unfortunately, quite a lot of the property IDs were for things that looked quite annoying to calculate. After trying to figure out a way to calculate a “signature hash” of an X.509 cert in Golang, and not having any fun whatsoever (mind you, I did find ways to do it, I just knew I would despise the process of coding it, and of ever looking at the horrible code that was bound to result), a thought crossed my mind: what does CryptoAPI do if some of the properties are missing from the blob, as long as the record format is correct? So, I took the blob that Windows had generated, and I wiped everything except for the X.509 cert itself and the 12-byte header for that property. I inserted it into the registry, and visited the corresponding website in Chrome. The website loaded just fine! Then I went back to the registry editor, refreshed, and was quite surprised to see that the moment CryptoAPI had validated the cert, it had recalculated all of the other missing fields and inserted them into the registry.
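The stripped-down blob from that experiment is easy to construct, with heavy caveats: the value 32 for CERT_CERT_PROP_ID is what the MinGW copy of Wincrypt.h gave me, and setting the reserved field to 1 reflects the only value I ever observed, not anything documented:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// certCertPropID is CERT_CERT_PROP_ID; 32 is the value from MinGW's
// Wincrypt.h, so double-check it against your own copy of the header.
const certCertPropID = 32

// minimalBlob wraps a DER-encoded X.509 certificate in the 12-byte record
// header, producing a blob containing only the certificate property.
func minimalBlob(der []byte) []byte {
	blob := make([]byte, 12+len(der))
	binary.LittleEndian.PutUint32(blob[0:4], certCertPropID)
	binary.LittleEndian.PutUint32(blob[4:8], 1) // reserved; 1 in my testing
	binary.LittleEndian.PutUint32(blob[8:12], uint32(len(der)))
	copy(blob[12:], der)
	return blob
}

func main() {
	der := []byte{0x30, 0x82, 0x01, 0x0A} // placeholder bytes, not a real cert
	fmt.Printf("% x\n", minimalBlob(der))
}
```

Write the result into the cert’s Blob registry value and, at least in my testing, CryptoAPI fills in the rest of the properties on first validation.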

So, basically, all of those other properties are, as best I can tell, just an elaborate caching mechanism, completely superfluous for proper operation. Microsoft made CryptoAPI substantially more complex, added at least 4 public-facing API functions (those are just the ones I accidentally ran across), and invented a custom, undocumented binary blob format, all so that they could avoid doing a couple of extra hash operations when verifying a chain that included a previously seen certificate. (Remember, hash operations are fast, while RSA and ECDSA, which aren’t cached here and are still needed to verify cert chains, are slow.)

Typical Microsoft. *slow clap*

Thanks go to ncdns developer Hugo Landau and Monero developer Riccardo Spagni for keeping me company on IRC while I figured all of the above out. What does this have to do with Namecoin? You’ll find out in my next post.

Progress on ncdns-nsis

May 26, 2017 • Hugo Landau

Development nears completion on the NSIS-based Namecoin and ncdns bundle
installer for Windows.

The ncdns-nsis repository provides
source code for an NSIS-based installer which can automatically install and
configure Namecoin Core, ncdns, and Unbound, and set up resolution of
.bit domains via Unbound.

The installer can install Namecoin Core and Unbound automatically, but also
allows users to opt out of the installation of these components if they wish to
provide their own.

Completion of the ncdns-nsis installer project will enable the Namecoin project
to distribute a Windows binary installer providing a turnkey,
configuration-free solution for .bit domain resolution. The installer is also
intended to support reproducible builds and can be built from a POSIX system.

At this point, extensive testing is the primary work remaining on the completion
of the ncdns-nsis installer.

We’re happy to announce that Namecoin is receiving 29,895 EUR in funding from NLnet Foundation’s Internet Hardening Fund. If you’re unfamiliar with NLnet, you might want to read about NLnet Foundation, or just take a look at the projects they’ve funded over the years (you might see some familiar names). The Internet Hardening Fund is managed by NLnet and funded by the Netherlands Ministry of Economic Affairs. The funding will support 4 Namecoin developers (Jeremy Rand, Hugo Landau, Brandon Roberts, and Joseph Bisch) in producing a usable decentralized TLS public key infrastructure.

Specifically, the following areas of development will be funded:

Integration with DNS functionality of major operating systems. We intend to support GNU/Linux and Windows, including DNS integration for Tor. Other operating system support may be developed if things go well.

Integration with TLS certificate validation functionality of major web browsers. We intend to support Chromium, Firefox, and Tor Browser on GNU/Linux and Windows. Other browser support may be developed if things go well.

Improvements to the lightweight SPV name lookup client.

A lightweight SPV wallet with name support. We intend to use Electrum.

Wallet GUI improvements, including Coin Control for name transactions and a name update GUI that doesn’t require knowing JSON.

Improved installation automation. We intend to provide a Windows installer that includes a Namecoin client, DNS integration, and TLS integration. Other OS support may be developed if things go well.

We’d like to thank the awesome people at NLnet Foundation for selecting us for this opportunity, as well as the Netherlands Ministry of Economic Affairs for recognizing that a hardened Internet is worth receiving government financial support.

We’ll be posting updates regularly as development proceeds. (Spoiler alert: a few components are already nearly ready for beta releases.)

ICANN58 Summary

Apr 17, 2017 • Jeremy Rand

As was announced, I represented Namecoin at ICANN58 in Copenhagen. Below is a brief summary of how it went.

A significant number of people in the ICANN community are interested in Namecoin.

While I have not attended previous ICANN events and therefore cannot evaluate this myself, my understanding is that the EIT panel session had an unusually large audience.

There is skepticism in the ICANN community of Namecoin’s ability to completely replace the DNS.

By far the most common reason for this skepticism is the concern that Namecoin may not be able to scale to DNS’s usage levels.

I fully agree that this is a good reason to be skeptical and that work needs to be done in this area.

Another concern raised was Namecoin’s lack of privacy in its current form (specifically the risk of transaction graph analysis).

The people who raised this concern appear to be satisfied that the Namecoin developers understand that this is a problem and that we intend to fix it. If we fail to fix it adequately, this concern is likely to become a bigger deal.

The ICANN community appears to be reasonably accepting of Namecoin’s role as an alternative to DNS; Namecoin makes different tradeoffs from DNS, is therefore likely to be optimal for a different userbase, and can co-exist with DNS in its current state.

Several people I met are interested in assisting Namecoin; we are following up with those people.

I ran out of business cards in my wallet 3 times in 3 days. Luckily, I carry a large stash of business cards with my travel laptop, so everyone who requested my business card received it.

My wallet is currently sufficiently full of business cards from ICANN58 attendees that I’m having trouble easily fitting my credit card into my wallet.

The joint meeting of ICANN’s Security and Stability Advisory Committee (SSAC) and the ICANN board included a segment on Special-Use Names and name collisions. For those who are unaware, this is of interest to Namecoin because it would be problematic for Namecoin if ICANN were to allow someone to purchase .bit as a standard DNS TLD.

The discussion of collisions between non-DNS names (such as Namecoin, though Namecoin wasn’t explicitly mentioned) and DNS names (such as if ICANN were to issue the .bit TLD to someone) begins at timestamp 42:25. I highly recommend watching the full segment, but some highlights include:

Recognize that name collisions will always be with us, and they’re not going to go away. There’s no way to control how people use names.

It’s important to control the things that you can control: make sure that the parts of the namespace that ICANN controls are predictable (harmonize with private-use names). We need to allow private-use names to exist, in the spirit of innovation.

Since we recognize that we are not the only ones who have names that will look like TLD names, and the community is going to use that kind of stuff in an interesting way, we need to have procedures for dealing with other bodies who are going to be creating special-use names for their own purposes. It is important to establish regular communication, how we each recognize each other, how we’re going to work together, and set ourselves up for potentially others (besides IETF) who may want to create lists of names. Be prepared to deal with other groups who are going to have their own lists of names.

Steve Crocker (chair of the ICANN board) said the following:

The IETF has a special names list, a reserved names list, but my understanding is that that’s not a definitive list in the following sense. It takes a while before a name gets onto that list. So, it tends to be on the conservative side. There are other names that are in use but have not gone through an IETF process. From where we’re sitting over at ICANN, if we want to be conservative, we would take into account not only the names on the reserve list from the IETF but also other names where it’s evident there is usage but nobody has come along and said we’re going to – you should reserve this and reserve this and so forth. So, I would think that our obligation is to have a somewhat wider field of view, including not only the official list but also what’s actually happening in the real world. And I can anticipate arguments that say well, there’s no official reason to reject this name [for ICANN issuance, e.g. someone buying the .bit TLD for non-Namecoin use], therefore you must accept it. I would say just the opposite, that we have an obligation to be careful, and if we see reasons why a name should not be allocated, then we have that authority, we have that obligation to do that and to err on the side of caution there.

I consider this an extremely good sign.

In response to a question in the Public Forum 2 about whether ICANN was looking into adopting Namecoin, Steve Crocker (chair of the ICANN board) commented “These things take time.” The full question and answer are in the ICANN transcript, pages 25-28. Steve’s comment is, in my opinion, a completely reasonable response.

We plan to continue engaging with the ICANN community.

We plan to continue engaging with IETF on Special-Use Name registration.

At this time, I have no reason to expect any hostile action by ICANN toward Namecoin.

As with other conferences, I won’t be releasing details of private conversations, because I want people to be able to talk to me at conferences without being worried that off-the-cuff comments will be publicly published. That said, all of the private conversations I engaged in were highly encouraging.

Huge thanks to David Conrad (ICANN CTO) for inviting me to attend ICANN58, and to Adiel Akplogan (ICANN VP of Technical Engagement) for inviting me to the EIT Panel. Also thanks to ICANN for covering my travel expenses. I hope we can do this again sometime.

QCon London 2017 Summary

Apr 2, 2017 • Jeremy Rand

As was announced, I represented Namecoin at QCon London 2017. Below is a brief summary of how it went.

The theme of the blockchain track was “Beyond the Hype”. As such, the presentations in the track primarily focused on all the things that can go wrong when using a blockchain. Riccardo Spagni (AKA fluffypony of Monero) is definitely an ideal person to host this track. My talk was on alternate blockchains, with a focus on Namecoin (and some Monero). In the spirit of going “Beyond the Hype”, my talk was almost entirely about things that can go wrong when using a non-Bitcoin blockchain.

I think this is a very important theme for a blockchain track, because the hype attached to the blockchain field is seriously problematic for our field’s credibility. None of us were there to sell our technology, attract investors, or grab media attention – we were there to provide a reality check for an audience who, in large part, had minimal exposure to blockchain technology and wanted to learn more about what use cases it’s good or bad for.

Lots of attendees talked with me over dinner and in the hallway, and I’ll be following up with them ASAP. After QCon, I attended a talk Riccardo gave at Imperial College; several people there were interested in Namecoin. I met up with Riccardo the next day to discuss lots of cool stuff involving Namecoin and Monero.

As with other conferences, I won’t be releasing details of private conversations, because I want people to be able to talk to me at conferences without being worried that off-the-cuff comments will be publicly published.

Huge thanks to Riccardo for inviting me, and to all the QCon conference organizers for an awesome conference (and for covering my travel expenses). It’d be awesome if we can do this again.

Namecoin Core 0.13.99-name-tab-beta1 Ready for General Use

Mar 20, 2017 • Jeremy Rand

Namecoin Core 0.13.99-name-tab-beta1, which has been listed on our Beta Downloads page for a few months, has demonstrated itself to be stable enough that it is now listed on the main Downloads page. Huge thanks to our Lead C++ GUI Engineer Brandon Roberts for his work on this.

Namecoin’s Jeremy Rand will be a speaker at ICANN58

Mar 3, 2017 • Jeremy Rand

ICANN has invited a Namecoin developer to speak at the ICANN58 meeting on March 11-16 2017 in Copenhagen, and I’m happy to say that I’ve accepted their invitation. Since I’m well aware that this may be surprising to some readers, I think it’s beneficial to everyone to announce it here, and give some details about why I’ll be attending.

The rest of this post will be in the excellent Q&A-style format.

Why did ICANN invite you?

To my understanding, I was invited because of a perception that there was a lack of understanding and dialogue between ICANN and Namecoin about specifically what the goals of each group were. The hope is that by encouraging discussion between ICANN and Namecoin, the groups will have a more accurate idea of what the other is doing and what common interests we might have.

It’s not news to me that ICANN has an interest in Namecoin; an ICANN panel report favorably mentioned us. However, I admit that I was (pleasantly) surprised to receive this invitation.

What do you think of ICANN and DNS?

To be totally honest, I’m not really very knowledgeable about how ICANN operates. I hope to gain some knowledge on this subject at the meeting. That said, I’ve heard that ICANN has some political issues. (Indeed, if James Seng’s comments in the aforementioned report are any indication, this is a recognized issue by ICANN participants, not just from the outside.) This is really not surprising, and as far as I know, it’s not due to any kind of nefarious motivation by ICANN or any people within ICANN. My take is that ICANN’s political issues are likely to be simply because ICANN is very large, and large centralized entities are inevitably going to have political issues. If, in an alternate reality, OpenNIC were wildly successful and ended up as large as ICANN is today, I predict that OpenNIC would end up with political issues too.

Isn’t that just ICANN’s own fault for being centralized?

That’s not ICANN’s fault; it’s the reality of the laws of math. When DNS and ICANN were created, everyone believed that decentralized global consensus was impossible (and this belief was well-supported by a proof by Lamport dating back to the 1970s). It wasn’t until Satoshi Nakamoto invented Bitcoin that anyone had any credible reason to believe that decentralized global consensus was solvable, and it wasn’t until Appamatto and Aaron Swartz proposed BitDNS and Nakanames 2 years later that anyone really seriously considered applying a Nakamoto blockchain to a DNS-like system.

But Namecoin exists now; doesn’t that make DNS obsolete?

Not really. Namecoin makes a number of design tradeoffs in order to achieve decentralization. Compared to DNS, Namecoin has significantly worse security against run-of-the-mill malware, significantly worse privacy against your nosy friends/neighbors/employer, and significantly worse resistance to squatting and trademark infringement, to list just a few. These are open research problems for Namecoin-like systems, whereas DNS has long ago solved them. I work on Namecoin because Namecoin also has some advantages over DNS, and I think there is a significant user base who want those advantages enough that they are willing to cope with the downsides. But that doesn’t mean that DNS is obsolete, or that I expect Namecoin to replace DNS anytime soon. If, in the future, Namecoin eventually solves those open research problems, and as a result replaces DNS, that’d be cool as heck from my point of view, but if that ever happens, I think it will be far enough in the future that it’s not worth worrying about right now.

Namecoin has almost no funding; if you had the budget of the DNS industry, wouldn’t those open research problems have been solved by now?

That would be inconsistent with the definition of “open research problem”. Funding would certainly help us spend more time tackling those problems, but there’s no guarantee that the problems are even solvable. Also, since no one is offering to give us such a budget, there’s not really much point in speculating here.

Are you being paid to attend?

ICANN is covering my travel expenses. (Naturally, I wasn’t going to ask NMDF to pay for me to travel. We don’t have anywhere near enough funding for that.) Other than that, I’m not being paid to attend.

Has ICANN asked for any control or influence on Namecoin?

Of course not. (And if they did, I would decline – as I assume would the other devs.) It’s entirely standard to talk to people working on related projects; it doesn’t imply any desire to influence or control those projects.

Are you concerned that this will be spun by market manipulators as some kind of sell-out?

I’m reasonably confident that market manipulators will try to profit by spinning this in some way, but that’s not anything new. We’ve already seen market manipulators try to make money by alleging a sell-out, based on everything from our application to Google Summer of Code in 2014 and 2015, to me getting a college scholarship from Google in 2013, to our collaboration with GNUnet, I2P, and Tor to try to register the .bit TLD as a special-use name via IETF. Those same market manipulators will, I assume, use this the same way, probably with the same minimal level of success that they had previously.

If I had any interest in spending my time worrying about market manipulators, I’d be in a different line of work, making way more money than I’m making right now. The best I can do is be transparent about this, so that it’s obvious to anyone who does an ounce of research that nothing nefarious occurred. Transparency FTW.

Will you publicly post your presentation slides?

Sure, why not?

Namecoin’s Jeremy Rand will be a speaker at QCon London 2017

Mar 3, 2017 • Jeremy Rand

As a result of an invitation from Riccardo Spagni (AKA fluffypony of Monero), I will be speaking at the “Practical Cryptography & Blockchains: Beyond the Hype” track at QCon London 2017 (March 6-8 2017). My talk is entitled “Case Study: Alternate Blockchains”. I will also be on a panel discussion alongside Paul Sztorc, David Vorick, Elaine Ou, Peter Todd, and Riccardo Spagni.

My understanding is that a video of my talk will be published by QCon. Assuming that that’s correct, I will post a link here when it’s available.

Huge thanks to Riccardo for inviting me, and to the QCon organizers for putting on the conference and covering my travel expenses. Looking forward to it!

Lightweight SPV Lookups: Initial Beta

Oct 23, 2016 • Jeremy Rand

If you watched my lightning talk at Decentralized Web Summit 2016 (and if you didn’t, shame on you – go watch it right now along with the other talks!), you’ll remember that I announced SPV name lookups were working. I’m happy to announce that that code is now published in preliminary form on GitHub, and binaries are available for testing.

You can download it at the Beta Downloads page. Once installed, it’s basically a drop-in replacement for Namecoin Core for any application that does name lookups (such as ncdns). Test reports are greatly appreciated so that we can do a proper release sooner.

Initial syncup using a residential clearnet cable modem connection takes between 5 and 10 minutes, depending on the settings. (It is probably feasible to improve this.) Lookup latency for name_show varies from 4 milliseconds to 2 seconds, depending on the settings. (It is also probably feasible to improve this.)

This work wouldn’t have been possible without the work of some very awesome people whom I need to thank.

First, I need to thank Ross Nicoll from Dogecoin (warning: non-TLS link) for creating libdohj, an altcoin abstraction library that has prevented Namecoin from needing to maintain a fork of BitcoinJ. We’re using the same AuxPoW implementation from libdohj that Dogecoin is using – a fitting repayment, since Dogecoin Core uses the same AuxPoW implementation that Daniel Kraft wrote for Namecoin Core. We look forward to continuing to work with Ross and the other excellent people at Dogecoin on areas of shared interest.

Second, I need to thank Sean Gilligan for his work on bitcoinj-addons, a collection of tools that includes a JSON-RPC server implemented using BitcoinJ, which can substitute for Bitcoin Core. Sean is also a big Namecoin enthusiast. (I also finally got to meet Sean in person at DWS.)

Last but not least, I need to thank Marius Hanne, operator of the webbtc.com block explorer. The SPV lookup client currently is capable of using webbtc.com for extra efficiency (either for checking the height of blocks to download over P2P, or for downloading merkle proofs). Marius has been incredibly helpful at customizing the webbtc.com API for this purpose. webbtc.com is under a free software license (AGPLv3), so you can run your own instance if you like.

Remember: this is a beta, for testing purposes only. Don’t use this for situations where incorrect name responses could lead to results that you aren’t willing to accept.

In addition, some notes about security. SPV protects you from being served expired name data, and from being served unexpired name data that isn’t part of the longest chain. However, the SPV modes other than leveldbtxcache (see the documentation) don’t protect you from being served outdated name data that hasn’t yet expired, from being served false nonexistence responses, or from someone logging which names you look up. We made an intentional design decision to trust webbtc.com here, rather than the Namecoin P2P network, because the P2P network is unauthenticated, trivially easy to wiretap, and trivially easy to Sybil. leveldbtxcache mode avoids these issues, although it takes about twice as long to synchronize. We have plans to add further improvements in these areas as well.

SPV also doesn’t protect you from attackers with a large amount of hashpower. As with Bitcoin, a major reason that miners can’t easily attack end users is that there are enough full nodes on the network to keep the miners honest. If you have the ability to run Namecoin Core (syncup time of a few hours, and a few GB of storage), you should do so – you’ll have better security for yourself, and you’ll be improving the security of other users who can’t run a full node.

Have fun testing!

Decentralized Web Summit Recap

Jul 3, 2016 • Jeremy Rand

As was mentioned on the forum and /r/Namecoin, I represented Namecoin at the Decentralized Web Summit at the Internet Archive in San Francisco, June 6 - June 10. Lots of awesomeness occurred.

I participated in a panel on naming and identity systems on Wednesday. Other panelists were Christopher Allen (Blockstream), Muneeb Ali (Blockstack), and Joachim Lohkamp (Jolocom); Chelsea Barabas (MIT Center for Civic Media) moderated. The panel had a diverse set of perspectives, and I think the discussion was informative.

On Thursday, I did a lightning talk. The talk briefly introduced Namecoin, and then went on to new developments, specifically new announcements about HTTPS and SPV. The lightning talk concluded with an invitation to talk to us about collaboration, and a plug for my workshop (which immediately followed).

The workshop was basically an intro to actually using Namecoin. I walked the attendees through registering domain names and identities, viewing domain names with ncdns, and logging into websites with NameID. We had some minor technical issues during the workshop (which is to be expected), but nothing too bad. At the end of the workshop, I showed a demo of the TLS code working. (Major thanks go out to fellow Namecoin developers Brandon Roberts, Jonas Östman, Joseph Bisch, and Cassini for helping me put together the workshop.)

But of course, I didn’t fly to San Francisco just to do a panel, lightning talk, and workshop. A major goal was to talk to as many other projects as possible to see where we could collaborate. (No single project is going to decentralize the entire Web, but working together, we might have a shot.) I won’t list all the conversations I had on this post, because I want people to be able to talk freely to me at conferences without being worried that the conversation will be posted for the world to see, but the number of orgs I talked to stands at at least 23. Hopefully we’ll be able to announce some results of these conversations in the near future.

And of course, it wouldn’t be an event by the Internet Archive without archived videos, so here are some of the highlights that Namecoiners will find particularly interesting:

I also want to thank Brewster Kahle and Wendy Hanamura for organizing the summit, and Kyle Drake of Neocities, Greg Slepak of okTurtles, and John Light of Bitseed for inviting me to attend. Also thanks to all the other organizers, speakers, and attendees: you’re all awesome. I really hope that Internet Archive makes this a regular event.

NMControl 0.8.1

UPNP Vulnerability in Bitcoin Core affects Namecoin

A vulnerability was found in Bitcoin Core. It allows malicious peers on the local network to attack a node via UPnP. Namecoin is affected too, so everybody should turn off UPnP until further notice.

OneName’s Blockstore is much less secure than Namecoin

Sep 29, 2015 • Namecoin Developers

Note: the naming system described in this post, Blockstore, is now named Blockstack.

It’s well-known that OneName has been engaged in questionable practices on the Namecoin blockchain, such as creating a duplicate namespace (u/), which harms the security of users and aids squatters. So we were interested to hear the news on September 12 that OneName has decided to stop using Namecoin in favor of a naming system they announced circa February 2015, Blockstore. Blockstore stores name transactions within OP_RETURN outputs on the Bitcoin blockchain (the full values of names are stored in a DHT). OneName claims that Blockstore has better security than Namecoin, on the grounds that Bitcoin hashpower is larger and more decentralized than Namecoin’s. This claim is based on a false assumption.

It is fairly well-known that a blockchain is a dynamic-membership multiparty signature (DMMS), attesting to the correctness of the transactions. Specifically, there are two main types of correctness that the blockchain attests to: the ordering of the transactions, and the validity of the transactions with regard to all previous transactions. Correctness of ordering is primarily to prevent double-spending of otherwise-valid coins. Transaction validity, such as checking ECDSA signatures and making sure that names are being spent by their rightful owner, allows SPV-based clients to function properly (among other things). OneName is claiming that Bitcoin’s blockchain is a DMMS attesting to Blockstore’s correctness. But the Bitcoin miners are only verifying ordering and Bitcoin transaction validity – they are not verifying the validity of Blockstore transactions.

Is this a big deal? Yes. It means that if you wrote an SPV client for Blockstore, I would be able to create an OP_RETURN output in the Bitcoin blockchain that claimed ownership of any arbitrary name, and your SPV client would not be able to detect the fraud. It also means that a validating Blockstore node cannot be a UTXO-pruned Bitcoin client – a full Bitcoin client is needed.
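A toy Merkle-branch check makes the distinction concrete: an inclusion proof only shows that some bytes were mined into a block, not that the claim inside them is valid. (Minimal Python sketch; the hashing and tree shape follow Bitcoin’s conventions, and everything else – including the fraudulent name claim – is illustrative.)

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    level = [dsha256(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:  # Bitcoin duplicates the last hash on odd-sized levels
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, branch, root):
    """Check a Merkle branch: a list of (sibling_hash, sibling_is_right) pairs."""
    h = dsha256(leaf)
    for sibling, sibling_is_right in branch:
        h = dsha256(h + sibling) if sibling_is_right else dsha256(sibling + h)
    return h == root

# A fraudulent OP_RETURN claiming someone else's name still "verifies":
honest = b"normal bitcoin transaction"
fraud = b"OP_RETURN blockstore: I now own d/wikileaks"
root = merkle_root([honest, fraud])
branch_for_fraud = [(dsha256(honest), False)]  # honest sibling sits on the left
print(verify_inclusion(fraud, branch_for_fraud, root))  # True: inclusion != validity
```

The proof passes because Bitcoin miners only attested that the output was mined, not that its Blockstore-level semantics were checked; a Namecoin SPV client, by contrast, relies on miners who are required by consensus rules to validate name ownership.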

To expand further on this issue, in the event of malicious data insertion by users bent on disrupting the system, clients that are asking for data from Blockstore servers have almost no way to verify whether the servers are telling the truth or not.

In Bitcoin, the notion of SPV clients was proposed by Satoshi as a way for lightweight clients to avoid the need to store the entire Bitcoin blockchain themselves. As part of this idea, Satoshi described a mechanism by which fraud proofs could be used to reveal cheating and, ostensibly, punish the cheaters. Bitcoin SPV is in large part enabled by the proof-of-work in Bitcoin which makes forging SPV responses very difficult to begin with. In Blockstore’s system, lightweight lookups have no PoW-backing, and there does not appear to be a way to have even simple Satoshi-style PoW-based fraud proofs to detect cheating, so clients are likely entirely at the mercy of anyone running a server. Blockstore-specific SPV fraud proofs would require complex mechanisms and probably extensive pathing hints, and since fraud proofs don’t even exist yet in Bitcoin (wherein the majority of blockchain research and development is happening and on which the attention of nearly all of the interested cryptographers, research scientists, developers, and engineers is focused) it is unlikely Blockstore can satisfactorily solve this particular problem given the more extreme constraints they are operating with.

The Blockstore system is analogous to recommending that currency be stored in a blockchain that has a higher hashrate than Bitcoin and has 100 equally sized pools, but whose block validation rules are so loose that signature verifications always pass. What’s the point when I can steal your money or names and your client can’t tell? (Of course, when all signature verifications pass, even double-spend detection can’t function either, so this doesn’t even solve double-spends.)

Namecoin, in contrast, enforces transaction validity (including name ownership) in blockchain validation rules. This means that, even if a majority of Namecoin hashpower wanted to steal your name, their transaction that spends your name coin would be rejected by all of the network’s validating nodes. They could double-spend a name, but the only attack that this enables is fraud during non-atomic name trades. Given that Namecoin supports atomic trading of a name for currency (see ANTPY and NameTrade), this attack is unlikely to be an issue.

A prolonged 51% attack on Namecoin could censor name transactions and eventually cause names to expire if the attack lasted 8 months, but this would be detectable and obvious to all users on the Namecoin P2P network. (Moreover, as soon as the attack started, it would be detectable and obvious specifically which names were targeted, and if the names were re-registered by the 51% attacker, they would be distinguishable from the original name by the block height of their corresponding name_firstupdate transaction. We have a pending proposal to make that block height visible to user applications so that such attacks can be automatically mitigated.) And of course, a mining pool who performed an 8-month-long 51% attack would most likely lose a lot of hashpower as their users abandoned them in favor of mining pools who aren’t attacking the network.

Is Namecoin’s mining centralization problematic? Yes, to a point. But with Namecoin there is a possibility for the situation to improve in the foreseeable future with the new Namecoin Core client, and with influx of new users and miners due to improved usability. While Namecoin currently has around 40 percent of Bitcoin’s hashpower verifying transaction validity, Blockstore has 0.

Additionally, DHT-based discovery of storage nodes is one of the classic suggestions from new users as an alternative to DNS seeds and, originally, IRC-based discovery. It has never been adopted, partly because it is trivial to attack DHT-based networks, and partly because, once a node is connected, Bitcoin (and thus Namecoin) peer nodes are forthcoming with peer-sharing.

As an actual data store, DHT as it is classically described runs into issues with non-global or non-contiguous storage, with little to no way to verify the completeness of the data stored therein. With the headers in OP_RETURN-using transactions in Bitcoin decoupled from the data storage in a separate DHT (or DHT-like) network, there is the likelihood of some little-used data simply disappearing entirely from the network. There is no indication of how Blockstore intends to handle this highly likely failure condition.

More interesting are Bitcoin-related failure conditions: Blockstore information is not in an obscured Merklized tree-like structure with the root node stored in Bitcoin; Blockstore proposes to store actual raw data in OP_RETURN, with the limited space therein necessarily requiring highly constrained bits of data and limits on its form and structure. With this, it would be trivial for anti-spam mining cabals to censor these transactions, and given the understandable hostility with which some Bitcoin advocates treat altcoin bloating of the Bitcoin blockchain, the inevitable result will be a mass-distributed method of censoring Blockstore transactions. Even if this is not so, it is a systemic flaw and therefore at permanent risk of being disrupted by miners who choose not to enable what they in turn view as the hostile economic externalities involved with permanent storage of questionably valuable data on behalf of third parties. As Satoshi noted: “Piling every proof-of-work quorum system in the world into one dataset doesn’t scale.”

Finally, one of the biggest flaws in this scheme is the nature of the loosely-connected graph structure formed by the so-called “consensus hash” that Blockstore hand-waves away by saying it will hash the data in the “last 12 blocks” without a concrete description thereof. What algorithms do they propose to ensure that not only does it form a properly, fully-connected graph and is verified, but also that its consensus-hash is accurate or even useful? Graph algorithms are notoriously complex: entire branches of science exist just to study and theorize about them. What use is this consensus hash when it’s entirely voluntary and arbitrarily-connected to prior data? How is the verification process going to proceed without, over a long period of time, becoming so complex that the cost is no longer feasible for normal consumer processors? How does Blockstore intend to discover what data each consensus hash even refers to? Transitive closure searches for such graphs could be made arbitrarily complex and non-deterministic by attackers who are interested in doing so, especially over time, and there is no mention of this class of vulnerability nor its proposed solution by Blockstore.

We will be working on improving Namecoin identities to make pseudo-decentralized naming systems obsolete. But unlike the commercial enterprise OneName, we are volunteers creating ethical software in our spare time. If you prefer our method of prioritizing sound design over flashy PR campaigns, join us!

Why Duplicate Namespaces Are Bad for Users

Sep 1, 2015 • Namecoin Developers

Every now and then, we’re approached by users who are asking about namespaces which duplicate functionality of the officially recommended namespaces. Examples of these namespaces include tor/, i2p/, and u/. We think these duplicate namespaces are harmful to end users. This blog post will try to explain why duplicating functionality of existing namespaces is almost always a bad idea.

Background

This whole thing seems to have started when someone noticed that the d/ specification, which is used for domain names, only supported IP addresses. This person thought it would be awesome to have DNS for Tor hidden services, and proposed a tor/ namespace for such domains. Someone else appears to have decided the same thing for I2P, and created an i2p/ proposal.

An alternative proposal was the d/ 2.0 spec, which allowed d/ names to map to Tor hidden services and I2P eepsites. This is the proposal which is for the most part still widely used today.

Why was d/ 2.0 a better idea than starting 2 new namespaces?

Duplicate Namespaces Attract Squatters

Imagine that d/, tor/, and i2p/ are all common namespaces. Let’s say that WikiLeaks is running an IPv4-based website using the name d/wikileaks. Now let’s say that WikiLeaks wishes to offer a Tor hidden service for their website. They now have to register a second name, tor/wikileaks. Uh oh, seems a squatter saw that WikiLeaks had d/wikileaks and preemptively grabbed tor/wikileaks and i2p/wikileaks just in case WikiLeaks wanted those names. Now WikiLeaks has to negotiate with a squatter, who potentially might be on the payroll of a government who wants to either cost WikiLeaks lots of money or impersonate it.

What’s the defense here? The only defense is to preemptively register your name in all namespaces, just in case you want them later.

Let’s say that in an alternate reality, WikiLeaks did exactly this, registering d/wikileaks, tor/wikileaks, and i2p/wikileaks. Now let’s imagine that someone decides that CJDNS mesh networking would be nice to have with Namecoin’s domain names, and creates a cjdns/ namespace. Well crap, now WikiLeaks is in a race with squatters to see who can grab cjdns/wikileaks first; if WikiLeaks fails, then they can’t ever use their name with CJDNS.

Remember that the domain squatters probably expend much more time watching namespace proposals, since that’s basically their job. WikiLeaks, on the other hand, probably is more worried about their whistleblower anonymity infrastructure than their choice of Namecoin namespaces.

This is simply a losing game for everyone except squatters.

Duplicate Namespaces Break Users’ Security Assumptions

Imagine a situation where an end user hears from a trusted source that wikileaks.bit is a great place to submit documentation of government corruption. wikileaks.bit uses the d/ namespace. But the end user doesn’t want to use clearnet, so she uses the tor/ namespace instead, accessing the domain wikileaks.tor. Note that she has just made the assumption that d/wikileaks and tor/wikileaks are operated by the same entity. However, this is not the case. The Namecoin network considers those two names to be entirely different things. Imagine that tor/wikileaks is instead operated by a government agency (perhaps they bought it from a squatter). Now our whistleblower is screwed.

If duplicate namespaces are in use, the only way to verify that two names are controlled by the same entity is to inspect the name you trust, and see if it links to the other name. For example, you could visit the website associated with d/wikileaks, and see if it mentions tor/wikileaks. This, of course, is neither automatable nor secure in our whistleblower’s case. Given how popular and successful phishing is, providing scammers with an easy way to break users’ security assumptions is an extremely bad engineering idea.

Consolidate All Namespaces with Common Use Cases

Let’s look instead at the d/ 2.0 proposal. It recognized that IP-based DNS, Tor-based DNS, and I2P-based DNS are all aiming at a common use case. By creating new functionality as new fields of the d/ namespace rather than new namespaces, the proposal makes sure that users of Namecoin DNS will be able to freely switch between IP, Tor, I2P, and any future system which may come along, without ever needing to register a new name. This protects users from squatters and protects their security, both now and in the future. (It also saves them money on name fees.)
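As a concrete illustration, a single d/ name value can carry addresses for several transports at once. A sketch of such a JSON value follows (the field names reflect our understanding of the d/ spec, but treat the exact keys and all addresses as illustrative, not a normative copy of the spec):

```python
import json

# Hypothetical d/wikileaks value: one name, one owner, multiple transports.
value = {
    "ip": "192.0.2.1",                    # clearnet IPv4 (documentation address)
    "tor": "examplexample000000.onion",   # Tor hidden service (made-up address)
    "i2p": {"b32": "examplexample.b32.i2p"},  # I2P eepsite (made-up address)
}

# A resolver simply picks whichever transport the user prefers -- no second
# name registration, and no race against squatters, is ever needed.
print(json.dumps(value, indent=2))
```

Because every transport lives under the same name, the control-of-name question ("is this the same WikiLeaks?") is answered once by the blockchain, for all transports at once.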

Your Options When a Namespace Spec Doesn’t Meet Your Needs

Let’s say that you want to implement a new feature into an existing use case in Namecoin. For example, maybe you wish the id/ spec had some extra fields. You have three options:

Talk to the other Namecoin developers and propose a change to the spec. This is what we generally recommend, although we realize that in some cases this may be difficult if your project is confidential prior to launch.

Add a custom field to an existing namespace spec. As long as your new field has a name which isn’t likely to collide with other use cases, this is pretty much harmless (although we’d love it if you talked to us). Even if you’re refactoring existing features, it is likely that software following the existing spec will ignore your extra fields, and you can ignore the preexisting spec’s fields. This means that the two schemes can co-exist pretty much peacefully.

Make a new namespace for your specific fields, and disregard the existing namespace. This is harmful; there is almost never a good reason to do this.

The u/ Namespace: Exactly How Not to Approach the Problem

As far as we can tell, a for-profit company (whom we won’t name here) decided to start offering identity products using the Namecoin blockchain. They didn’t like the structure of the id/ spec. So, did they talk to us? Nope, we never heard of their product until after it launched (we first noticed their product because of a bunch of nonstandard names showing up in the blockchain). Did they add some custom fields to id/? Nope, they instead did the worst possible thing as listed above: they made their own namespace (u/) with identical use cases as the pre-existing id/.

The Squatting Argument Revisited

While we’ve had difficulty reaching the team behind u/ for an official statement, we are under the impression that one of their stated rationales for creating a new namespace was that id/ was allegedly squatted too much. This is quite a strange claim. Creating a new namespace every time an existing namespace is heavily squatted will disrupt legitimate users of the existing namespace much more than it will disrupt squatters, even if a lot of the existing names are squatted. It also is an inherently cyclic process, since squatters can grab a new namespace just as fast as the old namespace. The correct way to disincentivize squatting (to the extent that it is a problem) is with changes to Namecoin’s fee structure.

A “More Structured” Spec

Another commonly cited “advantage” of u/ is that its JSON structure is allegedly better organized than that of id/. This is not an adequate reason to start a new namespace. Even if the u/ team didn’t want to directly work with us, they could have simply used their incompatible spec with the id/ namespace, and everything would have worked just fine. End users who wanted to use both specs would have been subject to the slight annoyance of having to enter duplicate fields in their existing names, but this would be infinitely better than needing a new name.

“Decentralization”

Several people have commented that, because Namecoin is decentralized, or because it’s a general-purpose naming system, it’s completely acceptable to create new namespaces without regard for the consequences. There is a fundamental difference between centralization and interoperability. Centralization would mean that an authority is dictating which names can and cannot be created; this would be extremely bad for end users. Interoperability means that the developer community is voluntarily standardizing on practices that allow everything to work together with minimal disruption; this directly benefits end users. We are for interoperability, not centralization. The ability to create new namespaces is primarily available for creating new applications. Of course, the blockchain validation math doesn’t care if two namespaces are used for similar things. This doesn’t mean it’s a good idea, or that people should be doing it on a regular basis.

Where From Here?

The u/ team has gotten themselves into a potentially bad situation, since now if they wish to move to id/, they’ll have to race against squatters: precisely the reason why we don’t recommend changing namespaces in the first place. We have limited sympathy for this predicament, since this could have easily been averted had they not created u/ in the first place. This has resulted in unnecessary trouble for end users, and we hope that a good resolution can be found.

Fix for OpenSSL Consensus Vulnerability has been deployed on 100% of mining hashpower

Aug 13, 2015 • Namecoin Developers

Fix for OpenSSL Consensus Vulnerability has been deployed on 100% of mining hashpower. Users of NamecoinQ (i.e. namecoind/Namecoin-Qt 0.3.x) are on semi-SPV security, and should wait for at least 6 confirmations for incoming transactions. Users of Namecoin Core (in beta) are on full-node security. Thanks to the miners for their quick action and everyone else who assisted in the response.

N-O-D-E Interviews Daniel Kraft

Name-Only Clients

Feb 1, 2015 • Namecoin Developers

Currently, the only options for people wanting to resolve Namecoin names are to deploy a full node or to trust results provided by someone else. It doesn’t have to be this way, though. As with Bitcoin, lightweight SPV clients, which have much smaller local storage requirements, can be created for Namecoin. Unlike Bitcoin, we can have “Namecoin Name-Only Clients” that give up support for “Namecoin as a currency” in exchange for better security and even lighter storage requirements.

Without the need to support currency transactions, a Namecoin SPV client can delete block headers that are more than 36,000 blocks old because they are not required to validate “current” (unexpired) name transactions. This data can be stored in a few dozen megabytes. We call this mode “SPV-Current” (SPV-C).
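Back-of-the-envelope numbers for that window (assuming the 10-minute target block time and plain 80-byte headers; Namecoin’s AuxPoW headers are larger, so the header figure is only a lower bound):

```python
# The 36,000-block expiry window in wall-clock time and raw header storage.
blocks = 36_000
days = blocks * 10 / 60 / 24        # 10-minute target block time
header_mb = blocks * 80 / 1e6       # 80 bytes per plain header (AuxPoW is bigger)
print(f"{days:.0f} days (~8 months)")    # this is why names expire after ~8 months
print(f"{header_mb:.1f} MB of headers")  # name data pushes the total to a few dozen MB
```

The same 36,000-block figure is what bounds the 51%-attack scenario discussed elsewhere on this blog: an attacker censoring name updates must sustain the attack for roughly 8 months before targeted names expire.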

To support SPV-C, a name-only client must be able to query full nodes for transactions by name. This capability can either be added to the Namecoin “wire protocol” or operate as a separate service. Responses include cryptographic proof (using a Merkle hash tree) that the transaction was included in a block which the SPV-C client has headers for.

This basic implementation of SPV-C is vulnerable to a malicious node answering a query with an older (but non-expired) name transaction, or censoring the result. The easiest way to combat this is to query multiple full nodes and use the most recent response, but this is fragile. A more robust option would be to use a cryptographic (Merkle) hash tree to represent the list of all “unspent” transactions. This allows a full node to prove that it is responding with the latest data, since updating a name is done by “spending” the previous update.

The SPV-C client only needs to have the “root” of the hash tree (which is only 32 bytes long) to verify this proof. By having miners include this root hash in the coinbase transaction of every block, the set of all current names is verified continuously throughout the mining process. Since name operations are marked spent when they expire, the fact that a name transaction is unspent is proof that it is current. We call this Unspent Transaction Output Tree/Coinbase Commitments (UTXO-CBC).

Once the UTXO-CBC is included in all blocks, name-only clients can then keep track of the current root hash (more accurately, the root hash of the 12th-deepest block, as keeping the UTXO set’s state at every recent block in memory is infeasible). When a client queries for a transaction by name, a full node will respond with the name transaction, proof that the transaction was included in a particular block, and proof that the name data hasn’t been updated by “spending” the transaction. UTXO-CBC constitutes a significant security improvement for SPV-C clients, as it prevents full nodes from returning anything but current name data. This mode is referred to as ‘SPV-C+UTXO-CBC’.

Implementing UTXO-CBC is complicated by a number of problems relating to the formulation of the cryptographic data structure representing the UTXO set. The most fundamental of these issues is that since new transactions occur continuously, it is important that miners be able to efficiently compute updates to the UTXO set. In other words, it must not be necessary for the entire cryptographic data structure to be recomputed when a new transaction is added to the set. Of course, any solution must preserve the ability to concisely represent the set and the ability to generate small, efficient proofs of inclusion. Thankfully, a similar UTXO coinbase attestation proposal has been made for Bitcoin, so existing work (such as the Authenticated Prefix Trees proposal) will be transferable to Namecoin.

The final security issue is ensuring that full nodes cannot censor names by simply lying “that name does not exist”. A very similar problem was solved in DNSSEC to provide “authenticated denial of existence” using a sorted list of all names. It’s possible to do this with the same sort of hash tree used to represent all unspent transactions. This commitment is referred to as Unspent Name Output Non-Existence CBC (UNO NX CBC). The UNO set is a name-only subset of the UTXO set.
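The sorted-list idea can be sketched in a few lines: if the chain commits to the sorted set of registered names, then a pair of adjacent registered names that brackets the query proves the query is absent. (Illustrative Python only; a real UNO NX CBC verifier would check a Merkle adjacency proof against the committed root rather than holding the whole list, and the name set here is made up.)

```python
import bisect

# Hypothetical committed set of registered names, in sorted order.
names = sorted([b"d/alpha", b"d/charlie", b"d/echo"])

def nonexistence_witness(query):
    """A full node's witness: the registered neighbors that bracket `query`."""
    i = bisect.bisect_left(names, query)
    assert names[i - 1] < query < names[i], "name exists or is out of range"
    return names[i - 1], names[i]

def verify_nonexistence(query, left, right):
    """A client's check: `left` and `right` are adjacent and bracket `query`."""
    i = names.index(left)
    return names[i + 1] == right and left < query < right

left, right = nonexistence_witness(b"d/bravo")
print(verify_nonexistence(b"d/bravo", left, right))  # True: d/bravo provably absent
```

A lying server cannot fabricate a valid witness for a registered name, because no adjacent pair in the committed sorted list brackets it; this is the same trick DNSSEC uses for authenticated denial of existence.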

Name-only clients can be improved incrementally, as attacks on even the basic SPV-C design require an attacker to control 100% of the full nodes connected to the target, and even then the attacker cannot arbitrarily forge data. Bootstrapping nodes can also attempt to provide a diverse selection of full nodes to clients, increasing the attack cost. However, UTXO-CBC and UNO NX CBC reduce the amount of network traffic, reduce latency, and increase security, so they should be implemented eventually.

Hopefully this post has illustrated that secure and lightweight resolution of Namecoin names is not only possible but practical. Once implemented, name-only clients will provide a much better option for those who cannot run a full node than anything currently available, since the existing options either provide no real security or require complete trust in a third party.

This post was slightly edited from its original version upon import to Jekyll, to more accurately reflect consensus among the Namecoin development community.

Developer Meeting Saturday, Jan. 3

Dec 30, 2014 • Namecoin Developers

Lately we’ve had a few new people who are interested in joining development. This is, of course, awesome. Unfortunately, helping them get started has been kind of ad-hoc, because there’s not a great way to have a low-latency conversation with current developers to bounce ideas off of. None of the current developers are in IRC at predictable, consistent times, and some developers are rarely on IRC at all. Forums have too much latency and overhead for quick back-and-forth discussions. This is a non-ideal situation.

So, we’re going to do an experiment – there will be a public developer IRC meeting in #namecoin-dev on Freenode on Saturday, Jan. 3, 9AM Pacific / 11AM Central / 12PM Eastern / 5PM UK / 6PM CET. We’re going to try to have as many Namecoin developers present as possible.

Current developers will give updates on what they’ve been doing; interested new developers can use it as an opportunity to get involved in discussions, ask questions, etc.; and interested average users will get to see more of how the development process works (because transparency is good). This bears some similarity to how Tor handles things.

Remember, this is an experiment, so we’re not quite sure how it will go. If it goes well, this may become a regular occurrence.

So, if you’ve been thinking of getting involved, or if you’re just curious, now’s your chance – stop by the developer meeting on Jan. 3 and talk to us. See you there!

Early Christmas Present for Namecoin

Dec 22, 2014 • Namecoin Developers

Christmas has come early for Namecoin: Discus Fish (aka F2Pool) has donated 20,000 namecoins to fund a reimplementation based on mainline Bitcoin!

What’s more is that chief Namecoin scientist Daniel Kraft has been working so hard that the code is already usable! Indeed, Discus Fish has been using it in production for a few weeks. The code is still considered experimental but you can check it out on Daniel’s Github repo. As always, make encrypted backups before playing around.

TL;DR: Namecoin is back!

This post was slightly edited from its original version upon import to Jekyll, to more accurately reflect consensus among the Namecoin development community.

New Version 0.3.80, Softfork Upcoming

Dec 19, 2014 • Namecoin Developers

This release smooths the transition to a new Namecoin codebase rebased on the latest version of Bitcoin!

Softfork

Starting at block height 212500 (around New Year’s Eve), miners will begin applying some checks for edge cases. There is a small chance that miners running older versions will fork, so all pool operators and solo miners must update. Regular users should update too, but the traditional precaution of waiting for six confirmations should be safe as well.

Be safe and make an encrypted backup of your wallet.dat file before updating. Thanks to everybody who helped make this happen!

Happy Halving Day!

Cakes for Competitors

Dec 6, 2014 • Namecoin Developers

The proliferation of cryptocurrencies working on decentralized TLDs has spurred a narrative of competition with Namecoin. However, the distance between our communities is small, and the competition is friendly. The free software movement has shown us that multiple approaches strengthen the ecosystem. Competition is not the zero-sum game fundamentalist Chicago-school economists make it out to be.

The title of this post is a reference to the Internet Explorer and Firefox tradition of sending celebratory cakes for major releases. While the two teams are competitors, this competition has pushed them to create better products. We believe the same is true of cryptocurrencies.

Namecoin developers are in this game to win it, but “winning” is the creation of decentralized, secure, and usable naming systems for the internet. Gaining acceptance with the IETF, building more secure registrars, lightweight clients, usable browser integration, and dynamic domain pricing are just a few challenges facing cryptocurrencies looking to add a decentralized TLD.

From proof-of-work to funding models and governance styles, Namecoin is very different from BitShares, Ethereum, or CounterParty. Sticking to our academic background allows us to build more robust security models and better defend ourselves from political and legal attacks. However, it also means we have to apply for public funding and struggle to build consensus. We believe that this diversity ultimately strengthens the community as a whole.

The bottom line is that we wouldn’t be working on Namecoin if the technology and solutions we are building were not reusable. BitShares has already benefited greatly from our work, borrowing our namespace specifications, adopting our protocols, and more. Building a usable censorship-resistant web will require significant blood, sweat, and tears from lots of smart people. We are happy to do our part, and we are looking forward to the contributions that come from competition and cooperation with the other cryptos.

This post was slightly edited from its original version upon import to Jekyll, to more accurately reflect consensus among the Namecoin development community.

Official Social Media Accounts

Oct 3, 2014 • Namecoin Developers

A large number of unofficial social media accounts have been making false claims about Namecoin. While we wish to encourage a healthy community and decentralization, we take user safety very seriously. Content posted through these accounts may pose a threat to average users, and the Namecoin development team has established official social media accounts in an attempt to help protect users.

The most serious issue, by far, has been claims about Namecoin being anonymous. We know that if a product is advertised as anonymous, it will be used in high-risk situations. If the product turns out not to actually be anonymous, it can end up killing its users.

There are a large number of unofficial accounts [1] that have actively posted claims about anonymity to Facebook and Twitter groups related to WikiLeaks, Anonymous, Whistleblowing, Occupy, and Tor. The Namecoin client has not been audited (even a little bit) for security over Tor, and a recent proxy leak issue was fixed in Namecoin-Qt after these postings occurred. If someone living under an oppressive regime had tried to use our software in conjunction with Tor, that proxy leak could have gotten them arrested, tortured, or killed as a result.

Namecoin transactions, including domain name purchases, are not anonymous. Namecoin, being based on the Nakamoto blockchain used in Bitcoin, inherently leaks large amounts of information about its users’ identities. In one early example, researchers Reid and Harrigan were able to trace stolen Bitcoins to a bank account where the funds were deposited, despite the thief using sophisticated attempts to anonymize the funds. In a later example, researchers Meiklejohn et al. successfully identified the cluster of most WikiLeaks donations, even for donors who utilized single-use addresses. As the graph theory and data mining experts take over from the cryptography experts, and as the financial incentive for targeted advertising increases, these attacks will only get more effective.

We have asked the owner(s) of these accounts to stop posting false assertions about Namecoin on multiple occasions. They have consistently disregarded our requests, sometimes claiming that it’s “just marketing” and that we “are not very good marketers.” After two of our developers criticized this behavior on the forum, they were banned from one Facebook group for “trying to make trouble”.

This reckless “marketing trumps accuracy” attitude is dangerous for Namecoin’s users and harmful to Namecoin’s and our professional reputations. We recognize the importance of anonymity and are working towards a client specifically designed to be used with Tor as well as adopting methods from ZeroCash for anonymously purchasing records (such as ‘.bit’ domains and ‘id/’ identities) on Namecoin [2].

At Namecoin, we try hard to avoid unnecessary drama and focus on development. We really wish we could trust these unofficial accounts to represent us on social media platforms, but they have unapologetically put Namecoin users at risk. We would like to welcome everyone to migrate from the unofficial accounts to the official ones listed below. We will delegate posting rights to community members that demonstrate competency and a respect for user safety.

[Jeremy edit 2017 05 10: social media accounts are now listed in the footer of every page on Namecoin.org.]

Stay safe,

- The Namecoin Development Team

[1] We won’t be linking to these accounts, since we don’t want to boost their PageRank. However, they are unfortunately very easy to find on the various social networks.

[2] Ironically, the owner of the social media accounts has cited development discussions regarding these features as evidence that his claims of anonymity are correct.

Namecoin & Retail Usage

Sep 21, 2014 • Namecoin Developers

Well-meaning investors have been pushing for various currency-related use cases for Namecoin such as ATMs, acceptance at retailers, etc. The core development team sees these efforts as benign but generally does not expend time or funds on them. Why would developers of a currency not place a priority on currency-related use cases?

The answer is that we place a priority on the specific use-cases that differentiate Namecoin from Bitcoin: DNS, ID, etc. Namecoin’s value comes from its utility as a naming system whereas Bitcoin’s value is primarily derived from its use as a currency. Competing with Bitcoin for dominance as a currency of exchange is suicide: Bitcoin has network effects that will overpower every other competitor in its market.

Furthermore, it is not clear that generic retail use of Namecoin is beneficial: most retailers immediately exchange cryptocurrencies into a reserve currency. Spending Namecoin at a retailer is only a step removed from exchanging Namecoin for fiat directly. Use as a reserve currency is Bitcoin’s problem; we have our own troubles to worry about.

The Great Aggregating Postmortem

Sep 9, 2014 • Namecoin Developers

In the past couple of weeks, Namecoin suffered outages. Someone (whom we’ve nicknamed “The Aggregator”) tried consolidating a large quantity of “loose change” into a single address. When done correctly, this is good because it reduces the amount of data lite clients [Jeremy edit 2017 05 10: this means UTXO-pruned clients] need to keep. Unfortunately, a combination of volume and an oversight in fee policies led to miners getting knocked offline.

Within 48 hours, we released a patch fixing the low fees, which should have stopped the transactions, but miners were slow to adopt it. We then increased performance, and miners have since finished processing the remaining transactions.

Our initial response was slow because our core developers were traveling. However, we are taking further steps to mitigate similar problems, improve our response times, and improve communications with miners.

Technical deep dive after the jump.

Deep Dive

An unknown party (The Aggregator) began consolidating a massive volume (50,000–100,000) of unspent outputs into a single address. However, namecoind has extreme difficulty selecting outputs to spend in a wallet containing tens of thousands of unspent outputs. The Aggregator appears to have attempted to address this by writing a script which manually built transactions spending 50 or 100 of those unspent outputs to a new address (N8h1WYaCpnZrN72Mnassymhdrm7tT6q5yL).

The problem is that each of these transactions was 17-30 KB, but rather than paying the standard transaction fee of 0.005 NMC per KB (rounded up), each paid a flat fee of 0.005 NMC. Namecoin, like Bitcoin, actually has two fee values: MIN_TX_FEE (used by miners) and MIN_RELAY_TX_FEE (used by all other full nodes). This enables MIN_TX_FEE to be lowered to MIN_RELAY_TX_FEE without nodes who haven’t yet upgraded refusing to forward transactions with the new lower fee. On Bitcoin, the ratio between MIN_TX_FEE and MIN_RELAY_TX_FEE is 5:1, but due to an oversight it was 50:1 in Namecoin. The result was that The Aggregator’s transactions had a large enough fee to be forwarded throughout the network, but were considered “insufficient fee/free” transactions by miners. Since there is very limited space in blocks for such transactions, they just kept building up in the miners’ memory pools. The volume of transactions soon began causing the merge-mining software run by pools to time out trying to get work.
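The fee mismatch can be sketched with a few lines of arithmetic. This is an illustrative back-of-the-envelope calculation using the numbers from this post, not actual namecoind code; the exact rounding rules in the client may differ.

```python
import math

MIN_TX_FEE = 0.005                   # NMC per KB, required by miners
MIN_RELAY_TX_FEE = MIN_TX_FEE / 50   # NMC per KB, required for relay (the 50:1 oversight)

def required_fee(size_kb, rate):
    # Fees are charged per KB, rounded up.
    return math.ceil(size_kb) * rate

tx_size_kb = 20       # The Aggregator's transactions were 17-30 KB
flat_fee_paid = 0.005 # the script paid a flat 0.005 NMC regardless of size

relayed = flat_fee_paid >= required_fee(tx_size_kb, MIN_RELAY_TX_FEE)
mined = flat_fee_paid >= required_fee(tx_size_kb, MIN_TX_FEE)

print(relayed, mined)  # → True False: forwarded by the network, rejected by miners
```

With Bitcoin’s 5:1 ratio, relay would have required 0.02 NMC for the same transaction, and these transactions would never have propagated in the first place.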

We released an initial “band-aid” patch which simply upped the MIN_RELAY_TX_FEE to stop these particular problem transactions from being rebroadcast or accepted by pools. Phelix and Domob put together an RC branch which fixed the performance issues associated with The Aggregator’s transactions. The transactions have all been processed; we have had positive feedback from miners and will be merging the RC branch into mainline shortly.

Additional techniques are being discussed to ensure Namecoin can better handle situations where there are too many transactions being broadcast to process them all. One is to temporarily drop lowest-priority transactions in the event of a flood, while granting higher priority to name transactions (NAME_NEW, NAME_FIRSTUPDATE, NAME_UPDATE) in order to ensure that time-sensitive operations are not delayed. Bitcoin developers have planned enhancements which will further mitigate the problem.
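The flood-handling idea above can be sketched as an eviction policy: when the memory pool overflows, drop the lowest-fee-rate non-name transactions first, so name operations survive. This is a hypothetical illustration of the proposal, not namecoind’s actual mempool code; the transaction representation here is invented for the example.

```python
# Name operations are time-sensitive (NAME_FIRSTUPDATE expires), so they
# are evicted last; everything else is evicted lowest fee rate first.
NAME_OPS = {"NAME_NEW", "NAME_FIRSTUPDATE", "NAME_UPDATE"}

def eviction_order(mempool):
    # Sort key: name ops (True) sort after currency txs (False);
    # within each group, lowest fee-per-KB comes first.
    return sorted(mempool,
                  key=lambda tx: (tx["op"] in NAME_OPS,
                                  tx["fee"] / tx["size_kb"]))

pool = [
    {"op": "SEND", "fee": 0.005, "size_kb": 20},        # flood transaction
    {"op": "NAME_UPDATE", "fee": 0.005, "size_kb": 1},  # time-sensitive
    {"op": "SEND", "fee": 0.01, "size_kb": 1},          # well-paying
]

print(eviction_order(pool)[0]["op"])  # → SEND (the big, low-fee-rate flood tx goes first)
```

Under this policy, a flood of oversized low-fee transactions gets shed before any name operation is at risk of being delayed.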

It was observed that at one point only one pool was functional. This pool mined many blocks in a row with no transactions. We have identified this pool and have been in contact with the operator. They modified their software some time ago to ignore all transactions to address some performance issues. We do not believe there was any malicious intent on their part, and they have committed to working with us to get to a point where they can include transactions in their blocks.

Response

Downtime is very bad; it opens us up to attacks and messes things up for businesses which depend on Namecoin. Trying to guarantee some arbitrary level of uptime is a fool’s errand but we endeavour to outline what went wrong and what steps we are taking to prevent similar issues from occurring in the future.

Each step of our response was unacceptably slow: discovering a problem, diagnosing the problem, submitting patches, and coordinating with miners all took far longer than they needed to. Our response was primarily slowed by the fact that two of our lead developers (Ryan and Domob) were both away from home with limited or no availability. There were other issues as well, which are addressed in the following recommendations, but this single factor significantly slowed every step of the process. Had they been available, we would have had patches out within 24 hours of the initial problem.

Initial Error Detection

The initial response times lagged mainly because we were unable to diagnose the problem. Individual developers (Ryan in particular) have patched clients which aid in diagnosing abnormalities. Initial detection of the problem lagged as well: individual users and developers noted problems, but this took some time to reverberate. An automated monitoring system that alerts us to abnormalities on the network would be useful as well.

Recommendations

Document and share patched clients

Ryan

Anon developer M?

Setup automated monitoring √

Block discovery times. √ Namecha.in has agreed to help.

Mining Pool ratios. √ Namecha.in + Ryan

Testnet-in-a-box. (Domob/Ryan need to publish)

Pushing Fixes

We went nearly 48 hours between having a band-aid patch and pushing out a statically compiled binary. This occurred for the following reasons:

Indolering didn’t know how to properly communicate and distribute the fix.

Limited communications with pool operators: it appears that the miners did not apply the initial band-aid patch, which would have fixed network issues immediately.

Even after the correct course of action was identified, it took 3-4 hours to create an official fork, produce a statically compiled binary, and post to the blog. We have been focusing our efforts on automating the build process for Libcoin, so this should not be a problem moving forward.

Recommendations

Codify process for creating an emergency branch and binary √

Setup new alerts mailing list and ask pool operators to join √

Gather contact info for major pool operators. √

Setup multi-user access to Twitter.

Streamline build processes.

Jeremy Rand’s progress on Libcoin:

Travis CI/automated builds √

Functional unit testing √

Miner-specific unit tests (Libcoin doesn’t yet support mining).

Readiness & Coordination

In first-aid, the first thing you do is order bystanders to contact emergency services, direct traffic, etc. More developers would obviously help, but, akin to first-aid, emergencies need someone ensuring that progress is being made. Without a proper first responder, you have a lot of bystanders simply standing around.

Recommendations

Fix developer mailing list. √

Create security mailing list. (in progress)

Appointing an official emergency coordinator to ensure things are moving.

Has contact information of developers (including phone numbers) and miners.

Alternate for when emergency contact is unavailable.

Improve volunteer onboarding. (planned for September)

Improve documentation.

Better labeling of repos (deprecate mainline, rebrand Libcoind, etc)

Public development roadmap.

Organize bounties/tickets on needed items.

Thanks

While Domob and Phelix coded the final fix, many people contributed to development. Phelix and Jeremy Rand were omnipresent during the entire response, from raising the initial warning to diagnosing the problem, proposing fixes, and testing. At one point, Jeremy Rand pulled 36 hours with no sleep. Ryan was especially generous with his time during a business trip, pulling all-nighters to help us diagnose the problem, develop fixes, and communicate with the mining pools.

Indolering worked hard to coordinate the response and communicate with miners. Our anonymous developers were a huge help; it’s a shame we can’t thank them publicly.

Namecoin Response to Recent No-IP Seizure

Jul 15, 2014 • Namecoin Developers

On June 30, 2014, customers of the No-IP dynamic DNS service suddenly experienced an outage. According to media reports, the outage was not caused by a technical issue or any kind of negligence by No-IP, but rather by a court order requested by Microsoft, supposedly to fight cybercrime. For background, see No-IP’s official statement.

First off – we only know what was reported by the media. However, we are very concerned that a US judge decided to shut down an immensely popular service provider due to a few abuses by a tiny minority of its users, particularly with no due process afforded to No-IP.

At Namecoin, we believe that decentralization is the best way to prevent malicious takedowns. The only way to be certain that your domain will not be collateral damage to this kind of abusive court action is to make sure that it is mathematically infeasible to take it down without your consent. Bad lawyers can’t change the laws of math.

Namecoin supports dynamic DNS via the .bit top-level domain and the DyName software by Namecoin developer Jeremy Rand. DyName has existed since October 2013, and both Jeremy and Namecoin developer Daniel Kraft have invested development effort to improve the security of this use case. Dynamic DNS was tricky to implement securely with Namecoin, because updates have to be done automatically on a machine with network access, and if the wallet (which must be unencrypted to be used automatically) is compromised, control of the name could permanently be transferred to the attacker. We circumvented this by using a Namecoin specification feature called the “import” field. Under this system, the keys necessary to transfer control of your domain, or to impersonate the server it points to, remain on an encrypted wallet. If the keys used to update your IP are stolen, the worst the attacker can do is make your website go down until you notice and change keys (your visitors will receive a TLS error upon visiting your website in the meantime). Our software for accessing .bit domains, NMControl (which is used by FreeSpeechMe), supports this feature.
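The key-separation design above can be sketched as two name values: the name itself lives in the encrypted wallet and merely delegates its records via the “import” field, while a second name in the hot (unencrypted) wallet holds only the frequently updated IP. This is a simplified, hypothetical illustration; the exact value format is defined by the Namecoin domain-name specification, and DyName’s real setup may differ in detail.

```python
import json

# Held in the encrypted (cold) wallet. Controls the domain itself and
# its TLS fingerprint; compromising the hot wallet cannot touch these.
cold_value = {
    "import": "dd/example",  # delegate volatile records to a separate name
}

# Held in the unencrypted (hot) wallet on the always-online machine.
# Stealing this key only lets an attacker break the site's availability.
hot_value = {
    "ip": "203.0.113.7",  # documentation-range address, updated automatically
}

print(json.dumps(cold_value), json.dumps(hot_value))
```

The asymmetry is the point: the automated updater never holds the keys that could transfer the name or impersonate the server, so the worst-case outcome of a hot-wallet compromise is downtime plus a TLS error for visitors.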

Namecoin also supports simulated dynamic DNS via Tor and I2P services. If you configure a .onion or .b32.i2p domain using Tor or I2P, you can give this domain a human-memorable name using Namecoin. The Namecoin software FreeSpeechMe has supported this for HTTP websites since December 2013. While bandwidth and latency are worse than standard IPv4, this method does not require any port forwarding or IPv6 connectivity (DyName requires one of these).

The system is not yet perfect. DyName DNS updates take approximately 10 minutes to propagate through the network, and up to 30 minutes after that to be refreshed in the NMControl cache. The Tor and I2P service support does not yet cover HTTPS or non-HTTP connections. Due to build issues, we haven’t yet been able to release a FreeSpeechMe update which includes the latest NMControl with support for DyName domains. And the Namecoin-Qt GUI does not yet make it easy to set up DyName or Tor/I2P .bit domains.

We believe that all of the above shortcomings are fixable. However, Namecoin developers are working on this in their free time, and we don’t have a significant financial stake in Namecoin like most altcoin developers do. If you’d like to support decentralized dynamic DNS to help prevent the next takedown, please consider contributing to a bounty for the features you want more attention paid to. And if you’re a developer and you want to help, get in touch with us on the forum.

New Website

Apr 20, 2014 • Namecoin Developers

Thanks to Shobute for designing and Indolering for pushing the new website.