This code defines the TLSA rrtype number as 52. It then declares three eight-bit fields for Certificate Usage, Selector and Matching Type, and defines the rest of the record as the actual certificate (hash) data. 'conv' methods are supplied for all DNS data types in use.

Wednesday, October 17, 2012

So here's a nice challenge. Let's say you have a list of member email addresses which you get from your account list. But you also have a list of addresses that your customers have used to communicate with you. And finally you have a list of email addresses of potential customers, who are not yet signed up.

Now let's say someone decides to do a survey of future and existing customers, and sends out the survey to a list that purports to be just that. And when the numbers of the survey come in... mass confusion arises, because the numbers don't add up.

So we now have a bunch of files, 'email addresses we mailed the survey to', 'main account email addresses', 'potential customer email addresses' and 'other customer email addresses'. And nobody knows why the last three don't add up to the first list.

Did I mention typos and polluted data? The pain.

So what would normal people do? Well, no idea, but we got on the case using 'diff'. This was tremendously helpful, but only up to a point. It is hard to answer questions like 'give me everybody not on list A but who is on B or C'. This is of course all very easy to do from SQL.

But I wasn't feeling like that. So, because C++ is what I do, I wrote a C++ program. I might be a dinosaur like that, but I feel compelled to point out that the program I present below will scale to literally billions and billions of customers. You know, in case you have more users than there are people on the planet.

So what does this small program do? Using slightly modern C++, it reads lines of text from several files. Once it is done, it prints, for each unique line of text, the files in which it was found. This allows you to quickly see for each email address whether it was part of 'main account email addresses', but perhaps not of 'potential customer addresses', etc.

Once it has printed that, it prints out a list of those same lines of text per 'configuration'. So you get groups of lines, and one group might represent 'on the list of addresses we emailed to, but not in any of the other files'. In other words, no clue why we emailed this address.

Another group might represent 'addresses we emailed and only present on the potential customer list'. And finally, and importantly, you might get the group 'we didn't email, but IS actually a customer'.

The output can be read by most spreadsheets, and they'll do the right thing.

All in all, this helped solve a tricky problem, and was actually implemented in less time than it would take to do it by hand. I hope ;-)

Monday, October 8, 2012

This story goes back a long time. For around 10 years now, people have been requesting that PowerDNS learn how to automatically listen on all available IP addresses. And for slightly less than that time, we've been telling people we would not be adding that feature.

For one, if you run a nameserver, you should *know* what IP addresses you listen on! How else could people delegate to you, or rely on you to resolve their queries? Secondly, running services by default on 'all' IP addresses is a security risk. The PowerDNS Recursor for this reason binds to 127.0.0.1 by default.

But still, people wanted this feature, and we didn't do it. Because we knew it'd be hard work. There, the truth is out. But we finally bit the bullet and had to figure out how to do it. This page shares that knowledge, including the fact that the Linux manpages tell you to do the wrong thing.

There are two ways to listen on all addresses, one of which is to enumerate all interfaces, grab all their IP addresses, and bind to all of them. Lots of work, and non-portable work too. We really did not want to do that. You also need to monitor new addresses arriving.

Secondly, just bind to 0.0.0.0 and ::! This works very well for TCP and other connection-oriented protocols, but can fail silently for UDP and other connectionless protocols. How come? When a packet comes in on 0.0.0.0, we don't know which IP address it was sent to. And this is a problem when replying to such a packet - what would the correct source address be? Because we are connectionless (and therefore stateless), the kernel doesn't know what to do.

So it picks the most appropriate address, and that may be the wrong one. There are some heuristics that make some kernels do the right thing more reliably, but there are no guarantees.

When receiving packets on datagram sockets, we usually use recvfrom(2), but this does not provide the missing bit of data: which IP address the packet was actually sent to. There is no recvfromto(). Enter the very powerful recvmsg(2). Recvmsg() lets us retrieve a boatload of ancillary parameters per datagram, as requested via setsockopt().

One of the parameters we can request is the original destination IP address of the packet.

IPv6

For IPv6, this is actually standardized in RFC 3542, which tells us to request parameter IPV6_RECVPKTINFO via setsockopt(), which will lead to the delivery of the IPV6_PKTINFO parameter when we use recvmsg(2).

This parameter is sent to us as a struct in6_pktinfo, and its ipi6_addr member contains the original destination IPv6 address of the query.

When replying to a packet from a socket bound to ::, we have the reverse problem: how to specify which *source* address to use. To do so, use sendmsg(2) and specify an IPV6_PKTINFO parameter, which again contains a struct in6_pktinfo.

And we are done!

To get this to work on OSX, please #define __APPLE_USE_RFC_3542, but otherwise this feature is portable across FreeBSD, OSX and Linux. (Please let me know about Windows, I want to make this page as valuable as possible).

IPv4

For IPv4 the situation is more complicated. Linux and the BSDs picked a slightly different way to do things, since they did not have an RFC to guide them. Confusingly, the Linux manpages document this incorrectly (I'll submit a patch to the manpages as soon as everybody agrees that this page describes things correctly).

For BSD, use a setsockopt() called IP_RECVDSTADDR to request the original destination address. This then arrives as an IP_RECVDSTADDR option over recvmsg(), carrying a struct in_addr. Note that this contains only the address itself; other details, like the destination port number, are not included.

For Linux, use the setsockopt() called IP_PKTINFO, which will get you a parameter over recvmsg() called IP_PKTINFO, which carries a struct in_pktinfo, which has a 4 byte IP address hiding in its ipi_addr field.

Conversely, for sending on Linux, pass an IP_PKTINFO parameter using sendmsg() and make it contain a struct in_pktinfo.

On FreeBSD, pass the IP_SENDSRCADDR option, and make it contain a struct in_addr holding the desired source address. Note that you cannot pick a source port this way: your socket is bound to exactly one port number (even if it covers many IP addresses).

Binding to :: for IPv6 *and* IPv4 purposes

On Linux, one can bind to :: and get packets destined for both IPv6 and IPv4. The good news is that this combines well with the above, and Linux delivers an IPv4 IP_PKTINFO for IPv4 packets, and will also honour the IP_PKTINFO for outgoing IPv4 packets on such a combined IPv4/IPv6 socket.

On FreeBSD, and probably other BSD-derived systems, one should bind explicitly to :: and 0.0.0.0 to cover IPv4 and IPv6. This is probably better. To get this behaviour on Linux, use the setsockopt() IPV6_V6ONLY, or set /proc/sys/net/ipv6/bindv6only to 1.

Tuesday, August 28, 2012

Over the past decade, PowerDNS has become ever more IPv6 compliant, and I think that as of a year or so ago, we have fixed every last issue.

So why did it take so long? Just creating an AF_INET6 socket and binding it shouldn't be that hard, right?

Here are some points to ponder:

IP addresses are used for more things than offering services! In other words, once your application can bind() to an IPv6 address, you are not done.

Filtering - if you offer the ability to restrict service to certain classes of IP addresses, your filtering needs to be IPv6 aware

Proxying - if your service can forward requests to somewhere else, that too needs to know what socket family to use

Databases and web services often supply data you need to do your work. Those underlying services, too, can live on IPv6!

So look not only at each call to bind() but to each call to connect() too!

IPv6 addresses are like IPv4 addresses.. but not quite

Scoped link local addresses. In some circles, it is recommended to bind services to locally scoped addresses, and such addresses look like fe80::92fb:a6ff:fe4a:51da%eth0. You must make sure you use the right API call to translate such a human address representation into a sockaddr_in6. I recommend getaddrinfo(), but pay close attention to its non-standard return codes!

Address lookup - if the operator specifies that queries need to be forwarded to 'downstream.yourcompany.com', did he mean the IPv4 address of that host? Or the IPv6 address? Or both? Or the first one that works? If there are multiple IPv4 and IPv6 addresses, what to do?

Also, be prepared to look up the address again. My Android phone tries to talk IPv6 over the IPv4-only cellular network immediately after leaving my home Wifi - which doesn't work!

Ports - for IPv4, it is common to use 1.2.3.4:25 to describe port 25 on host 1.2.3.4. But how to map this to IPv6, which already uses : internally? Commonly used patterns are [::1]:25 or ::1#25. Pick one, both for input and output. ::1:25 is ambiguous.

Some systems have no IPv6 at a very fundamental level. This includes security conscious FreeBSD and OpenBSD users. Make sure that your application can deal with a failure to bind to ::, and also with a failure to even *make* an AF_INET6 socket.

Some systems might not have an IPv4 address, preventing you from binding to 127.0.0.1 or even 0.0.0.0! This is a good test case to see if you really fixed everything.

When writing socket family agnostic code, be aware that some operating systems are more picky than others. For example, Linux will allow you to pass an AF_INET sockaddr with a socket length that is > sizeof(sockaddr_in), for example sizeof(sockaddr_in6). FreeBSD complains about this. So always supply the correct sockaddr length!

sockaddr_in6 contains a lot of fields, some of which are never used. This is unlike sockaddr_in which is completely specified by family, IP address and port. Be sure to zero your sockaddr_in6 before use, as it might start to fail silently later on if you don't.

On Linux, if you bind an IPv6 socket to the IPv6 address ::, by default, it will also listen on IPv4 0.0.0.0! This is unexpected and a bit weird, but it is what happens. If an IPv4 connection comes in over the IPv6 socket, the IPv4 address will be mapped to an IPv6 address that starts with ::ffff. The major issue with this is that you suddenly can't bind anything (else) to 0.0.0.0 anymore on the same port number. Use the IPV6_V6ONLY setsockopt() option to fix this.

If you still use such :: binding to IPv4 and IPv6, make sure that any access filters you have will correctly match ::ffff:1.2.3.4 to 1.2.3.4!

Finally, for PowerDNS it took ages to remove the last 'IPv4-only' vestiges. We only were done once we had audited *each* and *every use* of AF_INET in our source tree.

It is highly recommended to use an abstraction that allows you to specify, pass and print addresses without having to worry about them being IPv4 or IPv6. The PowerDNS ComboAddress might serve as inspiration. Code can be found here. From there you can also find code that helps match IPv4 and IPv6 addresses to NetMasks and NetMaskGroups, for easy filtering.

Finally, unless you truly audit each and every socket operation, you will miss certain corner cases, so do expect some fallout down the road.

Wednesday, May 30, 2012

Linux offers a wide & well known array of options to do asynchronous I/O over sockets, pipes, fifos and other transport mechanisms that can be represented as a file descriptor.

These options include select, poll and epoll. Sadly, none of these functions works for actual file-backed file descriptors.

This would normally cause problems, since we like to issue writes that are smaller than what is optimal for disks to handle. So the kernel offers a huge cache that effectively allows us to do 'asynchronous I/O' by buffering giant amounts of writes, and by holding on to previously read data.

As long as processes do I/O on substantially less than the amount of system RAM, this works perfectly.
If, however, we are doing reads and writes over a working set near or larger than what can comfortably be kept in memory, we would often like to take full control over disk access. At that stage, I/O needs to be managed carefully, something the kernel cannot do for us automatically (although it tries really hard).

In many senses, the page cache speeds up reads, but for writes it can be seen as a 'unique queue' that in case of duplicate writes only lets the last one actually happen, but that in the end still has to execute all the writes we've issued.

In that sense, when doing sustained writes exceeding the amount of system RAM, the page cache merely postpones the eventual slowness of writes.

Limitations of "synchronous" I/O

Real synchronous I/O is so slow as to be unusable for most purposes, and is relegated to the areas where it is truly required.

To understand what synchronous I/O entails, consider that getting a byte to disk requires first reading (at least) the sector that contains the location where we want to write, modifying the sector with our byte, and then commanding the device to actually magnetically write out our in-memory, modified copy of that sector. This last bit means moving the disk head to the proper position, and waiting for the correct sector to move under it in its entirety.

As disks rotate at between 100 and 200 rotations per second, you can see that this process takes at least somewhere between 8 and 24 milliseconds - assuming no time is actually spent moving the arm!

This means that in the best case we can do around 50-100 actual synchronous writes per second. If we want to store something directly in a data structure, this typically requires (at least) 2 write operations, bringing us down to 25-50 stores/second. If we are interested in stores with millions of entries, we'd need a day or more to actually fill one.

This is rather depressing since the actual database we'd end up with would be maybe 100MB - in other words, our disk could've written all that within a second!

To solve this without making life too hard, most operating systems, including Linux, default to not actually doing all your writes the moment you ask them to be performed.

Instead, file I/O operates on the page cache (in Linux), and periodically, big batched writes are sent out to disk. Often these batches are large and contiguous, and so can be written out at high speed.

Summarizing, even standard "synchronous" POSIX I/O is in fact not synchronous at all. However, if we want to make this standard I/O synchronous, we can do so by setting a few flags, and we end up with the atrociously slow performance described earlier.

Linux "Asynchronous" File I/O using pagecache control

If we want to achieve full control over I/O (and not be at the mercy of the page cache), we have multiple ways of achieving that. A cheap and fast way is to do I/O as normal, and instruct the kernel explicitly to remove things from the cache ('forget'), or in case things needed to be written out, make that happen more or less when we want it.

Using posix_fadvise(2), madvise(2), plus setting parameters in /proc/sys/vm just right, a remarkable amount of page cache control can be achieved. For many applications, this may be enough. The page cache can thus function as a way to make your calls to write() asynchronous, and your calls to read() fast enough that you won't care.

Bypassing the page cache with a non-asynchronous API

By means of O_DIRECT, files can be opened for reading and writing without going through the page cache. This comes with a lot of caveats: for example, data written must be an integral number of pages, and the memory holding the data must be page-aligned.

O_DIRECT offers an interesting in-between solution: it uses the normal POSIX file system calls, yet it allows reads and writes to happen without filling the page cache.

In this sense, O_DIRECT offers a lot of control. This comes at a significant performance cost however, because although there is no promise of writes being synchronous, the kernel does not have a lot of scope for buffering your write. Effectively, an O_DIRECT write inserts your data into the device queue, which often contains up to 128 requests. This may or may not be enough to keep your device streaming data at high speed.

Full asynchronous IO

The state of AIO in Linux is not overly pretty. POSIX standardized an API for asynchronous file I/O, and Linux in fact has a library that offers that API. In addition, Linux has a 'native' AIO interface too.

For reasons which are unclear (at least to me), the POSIX standardized API in Linux is, in fact, not implemented on top of the native AIO interface, except on Red Hat systems. The 'mainline' POSIX standardized API appears to be built with helper threads, and according to what I hear, does not deliver the desired performance.

In addition, the native AIO interface, which appears to work pretty well, is supplemented by an odd userspace library called libaio. Libaio is inconsistently documented, does not have a clear homepage, and I could not even find an authoritative location for its source code.

Asynchronous file I/O is tricky at best, and the state of the Linux libraries makes this worse. However, in many cases, it is still worth it. On the reading side, the kernel offers no (reliable) other way to indicate a whole bunch of required stretches of data which can be delivered as they arrive. Readv(2) will, for example, only return when all data is available. If one process is serving the needs of multiple clients, this means that all clients have to wait until the last read is in. With AIO, each client can be served as its data comes in.

When doing writes that need to happen at some point, but not necessarily synchronously, the kernel usually offers this via the page cache, but as explained above, when the actual write happens is hard to control, and very large amounts of memory may be consumed buffering data that the application knows could well be written out to disk already.

Finally, when reading and writing from fast SSD devices (especially if they are striped), the page cache only gets in the way, and asynchronous IO is the only way to get top performance.

The native API, as wrapped by 'libaio', is a confusing mix built on top of five system calls. Everything starts with the creation of an AIO context using io_setup(2).

To this context, we can submit requests using io_submit(2). It appears that while a request is with the kernel, we cannot reuse or free the memory holding the request, nor the data that is being written. In other words, the kernel does not make a private copy.

To figure out what came of our requests, we can call io_getevents(2), to which we pass the minimum and maximum number of requests we are interested in. io_getevents() returns a vector of results, and these results each include a pointer to our original requests, which is useful both for freeing their memory, and to figure out exactly which request this result belongs to. In addition, the result includes the error code associated with our request, encoded negatively in the 'res' field.

To cancel an outstanding request, we may use io_cancel(2), which might or might not succeed in preventing the request from happening. This can be useful when it is known that more recent data is available that supersedes a previously issued write request.

Finally, to tear down an AIO context, use io_destroy(2).

This concludes the five actual system calls. It should be noted that the system calls, as made available through libaio, do not adhere to the common UNIX convention of returning -1 in case of an error and setting errno.

Instead, these wrappers merely pass on the errno value returned from the kernel. So an error is still indicated by a negative number, but this negative number contains the error code. Errno is not set.

To these five calls, libaio adds a confusing number of helpers, which are not actually distinguished from the system calls in their nomenclature.

Two very useful and obvious helpers are io_prep_pread(3) and io_prep_pwrite(3) which help fill out the parameters of an AIO request to perform a pread(2) and a pwrite(2) respectively. io_prep_preadv(3) and io_prep_pwritev(3) perform analogous functions for the respective vector variants.

In addition, libaio offers infrastructure for 'running' the queue, and doing callbacks for specific events. So, as an alternative to io_getevents(), one can call io_queue_run(), and this will do io_getevents() for you, but also call any callbacks that were attached to submitted requests. To attach a callback, use io_set_callback().

Libaio also contains io_queue_wait(3), io_queue_grow(3), io_queue_release(3), but it is unknown what these calls do. They've been commented out in the header file of more recent releases, but the manpages still refer to them, and the library includes some of them too.

Given the confusion above, it might be better to elevate the native system calls into libc, probably under different names, and do without the confusing infrastructure of libaio. Alternatively, it is well possible that libaio might be cleaned up and documented to be less confusing.

Object hierarchy

There is an io_context struct, which is opaque. Requests are submitted as a struct iocb to the io_context. Each request has an opcode, a file descriptor, and a 'key' which could be used by the caller to match up request results to the original request. In addition, there is an unused (so far?) priority field.

Finally, there is a union 'u' containing one of four structs containing details that depend on the nature of the filedescriptor and the opcode. Except for pwritev and preadv requests, it appears that only the 'common' member called 'c' is used, and that this struct contains an offset, a length and a pointer where to put or get the data.

So to set the length of an io request, modify iocb.u.c.nbytes (or use io_prep_pread/pwrite).

When the results come back from io_getevents, they are in the form of an io_event struct. This struct contains an obj pointer back to the original iocb struct. It also contains the 'res' field which, if negative, gives us the negative of the error code of the request. If it is positive, it tells us how many bytes were read or written. The io_event struct also has a 'res2' and a 'data' field, but it is unknown what these do.

Linux native AIO semantics

These are unclear. According to various sources, asynchronous IO should only function usefully on filedescriptors obtained in O_DIRECT mode, but this does not appear to be true anymore. The example code in the io(3) manpage does not use O_DIRECT. It also does not compile without help.

In addition, it is said that Linux AIO does not do 'fsync'-like operations, but there is a helper function called io_prep_fsync(3), and a manpage which claims it can be used to queue an fsync() request (through a subsequent io_submit()).

To confuse things further, there is also an undocumented request opcode called 'fdsync', but I've not yet reverse engineered what this does (fdatasync?).

It is said that Linux native AIO really is a 'file-only' affair, but the include files do contain infrastructure for socket IO. It is unknown if this is hooked up.

With the call to io_setup, we pass a number of io requests that this context should be able to handle. When we try to submit more, rumour has it that io_submit(2) might block the caller. In my experience, it returns EAGAIN, allowing us to do an io_getevents() run to make space.

Some of the sample code appears to deal with partial writes or reads being indicated in the io_events returned by io_getevents. It is indeed possible to detect such partial writes by comparing the 'res' field of io_event to io_event.obj->u.c.nbytes. It is unknown if such partial writes or reads adhere to POSIX synchronous IO semantics - in which case they'd only happen in disk-full situations, or on attempts to read beyond the end of a file or a device.

Mixing Linux native AIO with other I/O multiplexers

It is also said that native AIO does not mix with other I/O multiplexers, but within the kernel and Libaio, we find io_set_eventfd, which appears to associate an individual IO control block with an fd that can be added to epoll, poll or select.

Things that are still synchronous

It has been noted (correctly) that a big downside of the AIO interface is the parts that are not asynchronous. Fileservers and webservers for example don't just read from files, they stat() and open() them too, and these remain fully synchronous operations.

In addition, even the Linux native AIO interface can drop back to being synchronous when asked to perform operations that the kernel is unable to do asynchronously. Examples include writing to certain filesystems, or extending the length of a file or writing to a file not opened in O_DIRECT mode.

The POSIX AIO API

It appears that this API lacks the equivalent of io_getevents(). To get the status of a previously submitted IO request, one can pass an array of all known outstanding requests to aio_suspend(), which will return if any of them have completed already. The function can then be called again with all known outstanding requests, etc.

As noted previously, the version shipping as part of glibc is not based on the native interface but emulates AIO with threads. The exception appears to be Red Hat based systems, which do ship with a POSIX API based on the native API (although you need to edit your LD_LIBRARY_PATH to include the rtkaio path to get it). Joel Becker has also written such an adaptor, which is available on http://oss.oracle.com/projects/libaio-oracle/files/

Monday, May 21, 2012

I'm working on some high performance code which needs to scale to many, many CPUs. Against better judgement, I decided to use threads again, but to steer clear from anything that needs locking. I've previously found this to be very hard, and this time round proved to be no exception.

In my "lock free code", the system was still spinning wildly with context switches, and strace showed a lot of calls to futex(2), the Linux primitive that underlies most locks & mutexes.

It took me a while to figure it out (why won't gdb set a breakpoint on futex()?), but it turns out that random(3) is guaranteed to provide the same sequence of numbers if it was seeded identically, and that this promise for some reason is kept even in a multithreaded setting. In other words, random(3) is serialized using shared state.

And given the amount of (non-cryptographic) random I required, this was a major slowdown in scaling to multiple threads. There are, by the way, at least 80 other 'embedded locks' in glibc, but most (except for stdio) do not appear to be in performance critical code.

In a random()-dominated benchmark, the single-threaded case ran in 1.7 seconds, using 4 threads this exploded to 69 seconds and with 8 threads it clocked a whopping 113 seconds. This all on a 4-core i7 system, while incurring 800k context switches/s.

The reason that this behavior from random() is noteworthy is that random() has long been considered 'not perfect but at least fast', and that when we go multithreaded, this rapidly turns into 'not perfect and slow, but keeping a promise you probably don't value about being predictable for a given seed'. The last bit is especially useless since multi-threaded programs rarely execute in a deterministic manner.

So to keep a promise you don't value, random() became really slow. And in fact, the very random() manpage mentions the non-reproducibility:

"This function should not be used in cases where multiple threads use random() and the behavior should be reproducible."

To be honest, I don't know what else glibc could've done here - the seeding srandom() function has never been specified to initialize the random() system for only one thread, so it must necessarily do so for all threads. And this in turn requires that random() accesses the shared random state.

At the very least, the manpage should contain a big warning that random() can be a source of contention, and that the use of random_r() should be considered if performance is required. I'll be drafting a paragraph for the inestimable Michael Kerrisk that does just that.

Tuesday, February 7, 2012

Some time ago, Dan Kaminsky mentioned the Secure Remote Password protocol (SRP) on Twitter. As I find certificates to be cumbersome, I'm always interested in solutions to setup trusted communications without them.

Update: this post has been updated based on suggestions by SRP designer, Tom Wu!

SRP is surprisingly unknown for what it offers, so I decided to do a writeup here to help spread the word on what SRP is and isn't, how to implement it, and what the caveats are.

SRP was designed by Tom Wu while he was at Stanford University. Details can be found from the page linked above, and the Wikipedia page is also informative.

This writeup assumes familiarity with hashes and Diffie-Hellman, but not a lot else.

What is the problem that SRP solves?

To do anything important on the public Internet, we need secure communication channels. Stricter definitions are available, but the core of this is that we have secrecy (other people on the Internet can't listen in) and that we know who we are talking to (other people can't impersonate the endpoint we think we are talking to).

Often we use SSL, in the form of https, to deliver both. The SSL protocol negotiates a session with privacy, and through the use of X509 certificates, makes sure that we are talking to someone with a certificate that matches who we think we are talking to (what this means in the year 2012 is anybody's guess, though).

In addition, once we have setup this secure channel, most of the time we will then log in to a remote system (think gmail, for example). We do so by providing a username and a password.

And here's the interesting thing. The server has provided proof of its identity to us in a way that doesn't allow us to subsequently impersonate that server. However, as users, we are then asked to give that same server our full username and password! If we share passwords across sites, this means the operator of the server can now try to log in elsewhere with our credentials.

Enter SRP.

Zero knowledge proof of identity with key establishment

SRP rolls up most of the things above. As a user, we can prove to the other side that we are who we say we are (or at least, that we know the password we enrolled earlier). We never have to send the remote our password, not even when we are enrolling. Simultaneously, we get proof that the other side is one that we enrolled with in the past. Finally, we also get a secure session key to encrypt our communications.

And all this without certificates!

Compare this to SSH, where we first setup a secure connection, and then send our full password over that same session. Or CRAM-MD5, which allows us to prove who we are without sending over our password, but which does demand that the server have a plaintext copy of our credentials.

All pretty impressive.

Practical details

With our username, the server stores a 'verifier' of our credentials, plus the salt used to generate this verifier. As is usual, we deal with hashes of the password, and not the password itself. The verifier however is even further away from the password, and consists of the result of a one-way cryptographic operation on the hash.

Password + Salt -> Hash -> Verifier

We never need to tell the server our password if we don't want to: when we enroll, we can just send it a random salt and the verifier, which we can calculate ourselves. Quite often, though, we'll provide the password just once and have the server calculate the salt, hash and verifier.
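The enrollment chain above can be sketched in a few lines. This is a toy illustration, not a conforming implementation: the modulus, the hash construction and the password are all placeholders I chose here, whereas real SRP uses a large safe-prime group and the rules from RFC 5054.

```python
import hashlib
import secrets

# Toy parameters for illustration only; real SRP uses a large
# safe-prime group and the padding rules from RFC 5054.
N = 0xFFFFFFFFFFFFFFC5  # illustrative modulus, not a real SRP group
g = 2

password = "correct horse"      # placeholder credential
salt = secrets.token_hex(8)     # random per-user salt

# Password + Salt -> Hash (x) -> Verifier (v)
x = int(hashlib.sha256((salt + password).encode()).hexdigest(), 16)
v = pow(g, x, N)  # one-way: recovering x from v is a discrete log

# Enrollment sends (username, salt, v); the password never leaves the client.
```

Note that the server ends up holding only the salt and verifier, which is exactly why a stolen server database does not directly leak passwords.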

When we want to set up a secure authenticated session, we connect to the server and tell it our username. In return, it reminds us of the salt used in generating the hash.

The user now enters the password on his own computer, which subsequently calculates the hash based on the salt. This hash constitutes our secret key. The verifier stored on the other side performs the role of a public key.

Next, a modified Diffie-Hellman key exchange is performed, based on the public and private keys derived from our password. If the server verifier is indeed related to our client hash, both parties will end up with a shared secret which, once hashed, forms our session key.

A final message exchange based on the shared session key is then performed, and if the session key is in fact identical on client and server, this will prove that the verifier and user password match.

And we are logged in, and have an authenticated secure shared key!
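The whole exchange described above can be sketched as a toy SRP-6a round trip. All parameters here are illustrative assumptions (tiny modulus, ad-hoc hash, placeholder password); a real deployment uses a large safe-prime group and the padding rules from RFC 5054, and hashes S to obtain the session key:

```python
import hashlib
import secrets

# Toy SRP-6a round trip; every parameter is illustrative.
N = 0xFFFFFFFFFFFFFFC5  # small toy modulus, not a real SRP group
g = 2

def H(*args):
    # Hash arbitrary values to an integer (toy construction)
    data = "|".join(str(a) for a in args).encode()
    return int(hashlib.sha256(data).hexdigest(), 16)

# Enrollment: server stores (salt, verifier) for the username
password = "correct horse"
salt = secrets.token_hex(8)
x = H(salt, password)           # client's secret key, from password + salt
v = pow(g, x, N)                # verifier, plays the role of a public key

# Login: client picks a and sends A; server picks b and sends B + salt
a = secrets.randbelow(N)
A = pow(g, a, N)
k = H(N, g)
b = secrets.randbelow(N)
B = (k * v + pow(g, b, N)) % N
u = H(A, B)                     # both sides derive the mixing value

# Client recomputes x from the salt; server only needs the verifier
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)

assert S_client == S_server     # hashing S yields the shared session key
```

The final assertion stands in for the real protocol's closing message exchange: only if the verifier and the password match do both sides arrive at the same S.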

Downsides

There are a few downsides to the above. For one, we have to send out our username in plaintext at the very beginning of the session. An eavesdropper can derive from this who we are claiming to be.

Secondly, while the server is 100% assured we know the password associated with the username, we are in turn not 100% sure if we are connected to the right server. We only know we are connected to someone who has access to our username and password (knowing only the verifier is not enough).

The last bit basically means that if someone knows our username and password, he'd be able to impersonate the server we connect to, and transparently forward everything on. But then again, if he knows our username and password, he could just log in as well.

If an attacker only knows the verifier, he can wait for a user to connect and pretend to be the intended server. However, the attacker can't set up a transparent connection to the authentic server, so you'd end up with an authenticated connection to a server that does not have the contents of the original.

Patent situation

Update: this section has been updated after further research. The previously mentioned patent is predated by the publication of SRP by a wide margin, so it does not apply.
Various sources mention that there might be a patent issue with SRP, but there is nothing concrete to point to. There is a Stanford patent, but it has been licensed in such a way that it does not hinder SRP usage in any way. It might in fact protect it.

When the SRP RFC was initially written, two firms made IPR disclosures, but these have not been followed up on. When RFC 5054 was published, which also describes SRP, there were no further IPR disclosures.

SRP has by now seen quite some use; if there were any applicable patents out there that were still valid, we would probably have heard about them already. If anyone knows of specific patents that apply, please let me know. It would be good to put this to rest.

The balance

SRP is pretty cool, and delivers quite a lot of 'bang for the buck'. It is way better than many other solutions, which either require plaintext passwords on the server, or require us to hand over our password to the server every time we log in.

In particular, it appears SRP should have been used in WPA2 instead of what was used.

However, SRP falls short of having 'real' authentication. If an attacker knows the password, he can effectively pretend to be the remote server. And even though an attacker with the correct password was pretty powerful already, this is a downside.

The use case for SRP is therefore for people who'd like to have better security than just passing around passwords, but don't want to go through the hassle of doing certificate administration.

Potential improvements

If we could keep the salt a secret, it could function as a poor man's authenticator. The server does not need to know the salt, as long as it has the verifier. The salt itself could then be used as the 'public' key of the remote.

Implementation notes

The best place to start reading is the SRP Design Page. There is also a paper which describes the math, but not the practicalities. RFC 5054 meanwhile describes how to do SRP within SSL/TLS, and Tom Wu suggests using that if possible. If you do SRP outside of SSL/TLS, it appears wise to follow the practicalities of SRP-TLS (mostly related to padding) anyhow, for interoperability's sake.

Sunday, January 15, 2012

Quick post as I have no time to work on this for now. Ages ago I read a book, I think by Arthur C. Clarke, where powerful atomic bombs were used to generate radar pulses so powerful, the return signal was used to map the entire solar system in one go. The grandeur of this vision impressed me a lot, and I hope that one day we can do it. (Btw, if anybody knows the name of the book, please share!).

If you send out one powerful 'ping' of radar signal, and only measure the strength of the return over time, you don't get a good picture - you learn how much reflection you get, and from how far away (based on the delay). But you don't get the angle. This is why 'real' radars rotate, so they can sweep the sky. (I know there are other reasons).

One of the 2012 goals for the PowerDNS Recursor is to become the DNS resolver with the best perceived experience for the end-users. This means not so much the highest performance (in terms of hundreds of thousands of queries/second, the usual metric), but in terms of getting the best answer to the user within the shortest amount of time.

In doing the math on this challenge, I needed to know how the response times of authoritative DNS servers are distributed, so I instrumented the PowerDNS Recursor to graph this while I sent it runs of 2,000,000 questions from a list of the most popular domain names. I was naively expecting some sort of Poisson distribution centered around 150ms.

But lo and behold, I got this graph. From this you can see that around 10% of answers came in between 1 and 2msec. But this graph isn't any kind of nice smooth distribution at all, and I should have realised that.

(the graph contains two runs, called 'plot' and 'plot.1', plus the combination of both runs in Blue)

The speed of light within a fiberoptic cable is around 200,000km/s, and because the answer needs to come back too, this equals around 100km per msec. So if you multiply the x-axis by a hundred, you get a very rough measure of the distance to all the authoritative servers queried. And servers are not distributed smoothly! They tend to cluster around hotspots.
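The rule of thumb is just arithmetic: one millisecond of round-trip time corresponds to roughly 100km of one-way distance (and this is an upper bound, since routing detours and queueing add delay):

```python
# ~200,000 km/s in fiber, halved because the answer travels back:
# each millisecond of round-trip time ~= 100 km of one-way distance.
def rtt_to_km(rtt_ms):
    return rtt_ms * 100

print(rtt_to_km(24))   # the 24 ms peak -> at most ~2400 km away
print(rtt_to_km(84))   # the ~84 ms peak -> at most ~8400 km away
```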

So what are these peaks? Well.. the first one turns out to be mostly anycasted servers present very close to xs.powerdns.com. A secondary peak (24ms) appears to be Milan (actual distance: 1000km, but we lose 20ms somewhere within Level3 for no apparent reason), hosting an instance of a.dns.it, plus an instance of b.gtld-servers.net in Stockholm (actual distance: 1500km).

The big void between 50ms and 75ms might correspond to the Atlantic Ocean.

The peak around 84-87ms matches closely with the East Coast of the US, whereas the somewhat broader peak beyond 158ms might well be California. Or Asia!

250ms is about what you'd expect from Australia; the peak at 350ms might again be Australia, but then 'the wrong way round'.

These analyses are very tentative, but I've now seen the same result on 4 different datasets, one of which was measured a few years ago and based on very different techniques, but gave the same result.

The dataset used to generate the graph above can be found at http://xs.powerdns.com/dns-radar; the format is "microseconds errorcode remote-server:port domain-name".
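A line in that format splits cleanly on whitespace. Here is a small parser sketch; the field names in the returned dict, and the example line itself, are my own choices for illustration, not part of the dataset:

```python
# Parse one line of the "microseconds errorcode remote-server:port
# domain-name" format described above.
def parse_line(line):
    usec, errcode, remote, name = line.split()
    server, port = remote.rsplit(":", 1)  # rsplit keeps IPv6-ish colons intact
    return {"usec": int(usec), "errcode": int(errcode),
            "server": server, "port": int(port), "name": name}

# Hypothetical example line in the documented format
rec = parse_line("1523 0 192.5.6.30:53 example.com")
print(rec["usec"] / 1000.0)  # response time in milliseconds: 1.523
```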

I might publish more details on how to reproduce later, but for now, I thought this was cool enough to share!

Tuesday, January 10, 2012

Affected

Most PowerDNS Authoritative Server versions < 3.0.1 (with the exception of the just released 2.9.22.5)

Not affected

No versions of the PowerDNS Recursor ('pdns_recursor') are affected.

Severity

High

Impact

Using well crafted UDP packets, one or more PowerDNS servers could be made to enter a tight packet loop, causing temporary denial of service

Exploit

Proof of concept

Risk of system compromise

No

Solution

Upgrade to PowerDNS Authoritative Server 2.9.22.5 or 3.0.1

Workaround

Several; the easiest is setting cache-ttl=0, which does have a performance impact. Please see below.

Affected versions of the PowerDNS Authoritative Server can be made to respond to DNS responses, thus enabling an attacker to set up a packet loop between two PowerDNS servers, perpetually answering each other's answers. In some scenarios, a server could also be made to talk to itself, achieving the same effect.

If enough bouncing traffic is generated, this will overwhelm the server or network and disrupt service.

As a workaround, if upgrading to a non-affected version is not possible, several options are available. The issue is caused by the packet-cache, which can be disabled by setting 'cache-ttl=0', although this does incur a performance penalty. This can be partially addressed by raising the query-cache-ttl to a (far) higher value.

Alternatively, on Linux systems with a working iptables setup, 'responses' sent to the PowerDNS Authoritative Server 'question' address can be blocked by issuing:
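The exact rule was not preserved in this copy of the advisory; the following is a reconstruction of the idea only, and an assumption rather than the advisory's original command. It uses iptables' u32 match to drop incoming UDP port-53 packets whose DNS QR bit is set, i.e. packets that are responses rather than questions:

```shell
# Reconstruction / assumption, not the advisory's original command.
# Drop UDP packets to port 53 whose DNS header has the QR bit set
# (responses); legitimate questions have QR=0 and still pass.
# The u32 expression skips the variable-length IP header, then tests
# bit 15 of the 32-bit word holding the DNS ID and flags.
iptables -I INPUT -p udp --dport 53 -m u32 --u32 "0>>22&0x3C@8>>15&0x01=1" -j DROP
```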

If this command is used on a router or firewall, substitute FORWARD for INPUT.

To solve this issue, we recommend upgrading to the latest packages available for your system. Tarballs and new static builds (32/64bit, RPM/DEB) of 2.9.22.5 and 3.0.1 have been uploaded to our download site. Kees Monshouwer has provided updated CentOS/RHEL packages in his repository. Debian, Fedora and SuSE should have packages available shortly after this announcement.

For those running custom PowerDNS versions, just applying this patch may be easier: