Ben Laurie blathering

30 Jul 2008

Ever since the recent DNS alert, people have been testing their DNS servers with various cute things that measure how many source ports you use, and how “random” they are. Not forgetting the command line versions, of course.

But just how GREAT is that, really? Well, we don’t know. Why? Because there isn’t actually a way to test for randomness. Your DNS resolver could be using some easily predicted random number generator like, say, a linear congruential one, as is common in the rand() library function, but DNS-OARC would still say it was GREAT. Believe them when they say it isn’t GREAT, though! Non-randomness we can test for.
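To make the point concrete, here is a sketch of why a linear congruential generator can look statistically fine and still be completely predictable. The constants are glibc-style values assumed for illustration (real rand() implementations vary): an attacker who observes a handful of “ports” can brute-force the hidden generator state and then predict every subsequent one exactly.

```python
# Toy LCG with glibc-style constants (an assumption for illustration).
M = 2**31
A = 1103515245
C = 12345

def lcg(seed):
    while True:
        seed = (A * seed + C) % M
        yield seed % 65536  # pretend these are "random" source ports

gen = lcg(seed=42)
observed = [next(gen) for _ in range(5)]  # what an attacker can see

def predict(observed):
    """Recover the hidden state from a few outputs, return the next port."""
    # The low 16 bits of the state are the first observed port; only the
    # high bits (M // 65536 = 32768 candidates) need brute-forcing.
    for high in range(M // 65536):
        s, ok = high * 65536 + observed[0], True
        for port in observed[1:]:
            s = (A * s + C) % M
            if s % 65536 != port:
                ok = False
                break
        if ok:
            return (A * s + C) % M % 65536  # the attacker's prediction
    return None
```

Five observed ports are already massive overkill here; the point is that no amount of statistical spread in `observed` stops the attacker, because the structure, not the distribution, is the weakness.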

So, how do you tell? The only way to know for sure is to review the code (or the silicon, see below). If someone tells you “don’t worry, we did statistical checks and it’s random” then make sure you’re holding on to your wallet – he’ll be selling you a bridge next.

But, you may say, we already know all the major caching resolvers have been patched and use decent randomness, so why is this an issue?

It is an issue because of NAT. If your resolver lives behind NAT (probably much more common since this alert, as many people’s reaction [mine included] was to stop using their ISP’s nameservers and stand up their own to resolve directly for them) and the NAT is doing source port translation (quite likely), then you are relying on the NAT gateway to provide your randomness. But random ports are not the best strategy for a NAT gateway: it wants to avoid re-using ports too soon, so it tends to use an LRU queue instead. Pretty clearly an LRU queue can be probed and manipulated into predictability.

So, if your NAT vendor is telling you not to worry, because the statistics say they are “random”, then I would start worrying a lot: your NAT vendor doesn’t understand the problem. It’s also pretty unhelpful for the various testers out there not to mention this issue, I must say.

Incidentally, I’m curious how much this has impacted the DNS infrastructure in terms of traffic – anyone out there got some statistics?

Oh, and I should say that number of ports and standard deviation are not a GREAT way to test for “randomness”. For example, the sequence 1000, 2000, …, 27000 has 27 ports and a standard deviation of over 7500, which looks pretty GREAT to me. But not very “random”.
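The numbers in that example are easy to check: an arithmetic sequence of ports sails past a naive “many distinct ports, big spread” test despite each port being the previous one plus 1000.

```python
import statistics

# The sequence from the text: 1000, 2000, ..., 27000.
ports = list(range(1000, 28000, 1000))

distinct = len(set(ports))         # 27 distinct ports
spread = statistics.pstdev(ports)  # population std dev, well over 7500
```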

27 Jul 2008

The W3C is a pay-to-play cartel that increasingly gets nothing done. Open source developers can’t even participate, as a rule. It also has an IPR policy that’s just as crap as everything else we’re trying not to emulate. So, not a realistic alternative.

The IETF is much better, but its main problem is that it has no IPR policy at all, other than “tell us what you know”. In practice this often works out OK, but there have been some notable instances where the outcome was pretty amazingly ungood, such as RSA’s stranglehold over SSL and TLS for years – a position Certicom are now trying to emulate with ECC, also via the IETF.

A more minor objection to the IETF that I hope the OWF will solve similarly to the ASF is that it is actually too inclusive. Anyone is allowed to join a working group and have as much say as anyone else. This means that any fool with time on their hands can completely derail the process for as long as they feel like. In my view, a functional specification working group should give more weight to those that are actually going to implement the specification and those who have a track record of actually being useful, much as the ASF pays more attention to contributors, committers and members, in that order.

24 Jul 2008

The OWF is an organization modeled after the Apache Software Foundation; we wanted to use a model that has been working and has stood the test of time.

When we started the ASF, we wanted to create the best possible place for open source developers to come and share their work. As time went by, it became apparent that the code wasn’t the only problem – standards were, too. The ASF board (and members, I’m sure) debated the subject several times whilst I was serving on it, and no doubt still does, but we always decided that we should focus on a problem we knew we could solve.

Ever been frustrated that you can’t find out something that ought to be easy to find? Ever been baffled by league tables or ‘performance indicators’? Do you think that better use of public information could improve health, education, justice or society at large?

The UK Government wants to hear your ideas for new products that could improve the way public information is communicated.

19 Jul 2008

A few weeks ago, we invited a group of external security experts to come and spend a week trying to break Caja. As we expected, they did. Quite often. In fact, I believe a team member calculated that they filed a new issue every 5 minutes throughout the week.

The good news, though, was that nothing they found was too hard to fix. Also, their criticism has led to some rethinking of aspects of our approach, which we hope will make the next security review easier as well as Caja more robust.

10 Jul 2008

ACTA, first unveiled after being leaked to the public via Wikileaks, has sometimes been lauded by its supporters as “The Pirate Bay-killer,” due to its measures to criminalize the facilitation of copyright infringement on the internet – text arguably written specifically to beat pirate BitTorrent trackers. The accord would add internet copyright enforcement to international law, force national ISPs to respond to international information requests, and subject iPods and other electronic devices to ex parte searches at international borders.

IPETEE would first test whether the remote machine is supporting the crypto technology; once that’s confirmed it would then exchange encryption keys with the machine before transmitting your actual request and sending the video file your way. All data would automatically be unscrambled once it reaches your machine, so there would be no need for your media player or download manager to support any new encryption technologies. And if the remote machine didn’t know how to handle encryption, the whole transfer would fall back to an unencrypted connection.

a) Develop an informational framework document to describe the motivation and goals for having security protocols that support anonymous keying of security associations in general, and IPsec and IKE in particular

3 Jul 2008

It seems like a long time since I spent a very long afternoon (and evening) observing the electronic count of the London Elections. Yesterday, the Open Rights Group released its report on the count. The verdict?

there is insufficient evidence available to allow independent observers to state reliably whether the results declared in the May 2008 elections for the Mayor of London and the London Assembly are an accurate representation of voters’ intentions.

There was lots of nice machinery and pretty screens to watch, but in my view three more things were needed to ensure confidence in the vote.

A display that showed (a random selection of) ballots and the corresponding vote recorded automatically.

No machines connected to the network that could not be observed.

A commitment to the vote (I mean this in the cryptographic sense), followed by a manual recount of randomly selected ballot boxes.

The last point is technically tricky to do properly, but I think it could be achieved. For example, take the hash of each ballot box’s count, then form a Merkle tree from those. Publish the root of the tree as the commitment, then after the manual recount, show that the hashes of the (electronic) counts for those boxes (which you would have to reveal anyway to verify the recount) are consistent with the tree.
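A minimal sketch of that commitment scheme, assuming SHA-256 and made-up per-box counts: hash each box’s electronic count, build a Merkle tree over the hashes, and publish only the root before the recount. Afterwards, the counts for the recounted boxes are revealed together with their sibling hashes (the audit path), which anyone can check against the published root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def audit_path(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append((level[sib], sib < index))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Illustrative per-box electronic counts (invented for the sketch).
counts = [b"box-01:1234", b"box-02:987", b"box-03:1502", b"box-04:2048"]
root = merkle_root(counts)  # this is what gets published before the recount
```

After the recount of, say, box 3, the count `counts[2]` and `audit_path(counts, 2)` are revealed; `verify` then confirms they are consistent with the committed root without revealing the other boxes’ counts.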