Category Archives: Internet Engineering Task Force

I am happy to report that this web site now supports the new HTTP/2 protocol. HTTP/2 was standardised in RFC 7540. The most tech-savvy can turn to the Wikipedia entry, which describes the protocol in more detail. Suffice it to say that it aims to make web sites load faster.

However, there is one requirement that is still difficult to honour for individual, non-commercial sites like this one: HTTP/2 requires the connection between the web server and the browser to be encrypted. For this, one needs an SSL/TLS certificate, which can cost quite some money. Starting 16 November 2015, the Let’s Encrypt project will issue free SSL/TLS certificates, trusted by all browsers. This will be a serious boost for HTTP/2. This web site is part of the Let’s Encrypt Limited Beta, meaning it can already support HTTP/2.
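In practice, a browser discovers HTTP/2 support during the TLS handshake, via the ALPN extension. A minimal Python sketch of that negotiation, for the curious (the host name passed in is up to you; the function needs network access to actually run):

```python
import socket
import ssl

def negotiated_protocol(host, port=443):
    """Connect over TLS, offering HTTP/2 via ALPN, and report what the server picked."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back to HTTP/1.1
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()  # "h2" if the server speaks HTTP/2

# Example (requires network access):
# print(negotiated_protocol("example.org"))
```

If the server answers "h2", the client and server switch to HTTP/2 over that encrypted connection; this is why the certificate requirement comes with the protocol in practice.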

The EAI working group of the IETF has finished (part of) its work on the internationalization of e-mail addresses. This, together with Internationalized Domain Names (IDN), will make it possible to send e-mail messages to addresses that are not restricted to 7-bit ASCII, e.g. måtte@københavn.dk or 中国@中国.中国.

They still have the “Experimental” status, meaning they are not yet a standard. How long it will take to see them in actual products is difficult to guess. Software vendors tend to look at market demand before implementing new features. Hence, it is time to pressure your favourite e-mail client vendor. Tell them you need this. For Microsoft Outlook, you could try here. For Apple Mail, there. For Mozilla Thunderbird, still somewhere else.
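To see what implementation support eventually looks like, here is a sketch using today's Python standard library, which can serialize a message to a non-ASCII address with the SMTPUTF8 policy (the sender address is a placeholder; no mail server is contacted):

```python
from email.message import EmailMessage
from email.policy import SMTPUTF8

# Build a message to one of the internationalized addresses mentioned above.
# The SMTPUTF8 policy serializes headers as raw UTF-8 instead of refusing
# them or mangling them into encoded words.
msg = EmailMessage(policy=SMTPUTF8)
msg["From"] = "sender@example.org"  # placeholder sender
msg["To"] = "måtte@københavn.dk"
msg["Subject"] = "Internationalized e-mail address test"
msg.set_content("Hej!")

wire = msg.as_bytes()
# On the wire the address appears as UTF-8; the receiving server must
# advertise the SMTPUTF8 extension for delivery to work
# (smtplib.SMTP.send_message negotiates this automatically).
```

The point of the working group's output is exactly this negotiation: both ends of the SMTP conversation must opt in, which is why client and server vendors both need to hear the demand.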

One possible issue may be with vanity gTLDs like apple, ebay etc. Some expect that every Fortune 1.000.000 company will apply for its own TLD. My guess is rather the Fortune 1.000 for a start, but this does not change the nature of the issue, i.e. those companies may want to use e-mail addresses like user@tld.

The current standard is defined in RFC 2821 as follows:

2.3.5 Domain

A domain (or domain name) consists of one or more dot-separated components.
[…]
The domain name, as described in this document and in [22], is the entire, fully-qualified name (often referred to as an “FQDN”). A domain name that is not in FQDN form is no more than a local alias. Local aliases MUST NOT appear in any SMTP transaction.

Hence, if either the mail client or the MTA expects to see a dot in the domain name and there is none, its behaviour may be unpredictable. The new gTLD context is addressed in the draft RFC 2821bis, which states:

2.3.5. Domain Names

A domain name (or often just a “domain”) consists of one or more components, separated by dots if more than one appears.

There may be a lot of software out there that would treat user@tld as a local e-mail address (i.e. not a fully qualified domain name). It is not unusual to still find, inside company data centers, old internal SMTP gateways that have been quietly doing their job for a long time and have not been updated in years.
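The kind of breakage at stake is easy to reproduce. Here is a sketch of the sort of naive validation logic one finds in legacy software (the regular expression is illustrative, not taken from any real product):

```python
import re

# A common naive pattern: insist on at least one dot after the "@",
# effectively treating a dotless domain as a mere local alias.
NAIVE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def naive_is_valid(address):
    """Mimics legacy software that requires a dot in the domain part."""
    return bool(NAIVE_EMAIL.match(address))

print(naive_is_valid("user@example.com"))  # True
print(naive_is_valid("user@ai"))           # False, although RFC 2821bis allows it
```

Any gateway or client embedding a check like this will silently refuse mail to user@tld, whatever the updated standard says.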

Some pointed out on the IETF list and elsewhere that we have had, for ten years, a ccTLD that accepts e-mail in the form user@ai. It is one thing that the behaviour of a small ccTLD apparently generated no complaints. It is another that a large number of companies may want to force the Internet to adapt to their advertising strategy. At this stage, we have no meaningful statistical evidence that the currently deployed software is able to deal successfully with e-mail addresses that are directly under a TLD. I am not aware of any study by ICANN’s SSAC on that matter.

In any case, when ICANN enters into an agreement with the registries operating the new gTLDs, it has to be very clear that compliance with existing technical standards is a must, and that not respecting them would be a breach of contract.

It would be problematic for end users/customers/consumers if companies started advertising e-mail addresses like support@mycompany when the delivery of the e-mail depends on some software being non-standards compliant.

On a related note, my colleague Franck Martin pointed out to me last Friday that browsers usually append “.com” to any domain name they consider incomplete. Again, this is going to break a lot of software that has hard-coded lists of TLDs. Similarly, there are also millions of web forms out there that check for malformed e-mail addresses based on the presence of a dot and/or hard-coded lists of TLDs.
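The hard-coded list is a distinct failure mode from the dot check. A sketch of what such a web-form validator typically looks like (the list here is invented for illustration):

```python
# A TLD list frozen at build time -- illustrative, not from any real product.
KNOWN_TLDS = {"com", "net", "org", "edu", "gov", "dk"}

def form_accepts(address):
    """Reject any address whose TLD is not on the hard-coded list."""
    try:
        domain = address.rsplit("@", 1)[1]
    except IndexError:
        return False  # no "@" at all
    return domain.rsplit(".", 1)[-1].lower() in KNOWN_TLDS

print(form_accepts("user@example.com"))     # True
print(form_accepts("user@example.berlin"))  # False: the list predates the new gTLD
```

Every new gTLD instantly invalidates every copy of such a list deployed in the field, which is why the scale of the new gTLD programme matters.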

There is currently a discussion going on between Milton Mueller and Patrik Fältström over the deployment of DNSSEC on the root servers. I think the discussion exemplifies the difficult relationship between those who develop standards and those who use them.

On the one hand, Milton points out that the way the signing of the root zone will be done will have a great influence on the subjective trust people and nation states will have towards the system. On the other hand, Patrik states that “DNSSEC is just digital signatures on records in this database”. Both are right, of course, but they do not speak the same language. It is just like saying that a spam e-mail which is RFC (2)822 compliant is a legitimate one. From a technical point of view, it certainly is. From a social point of view, it is still an annoyance.

There is an often-expressed feeling in the engineering community that technological choices are politically neutral by design. Nothing is further from the truth, as has been demonstrated by people like Lawrence Lessig. The development of standards is done exclusively by companies. Notice, for example, that those attending IETF meetings do so on company time and budget. The actual users are absent. The argument that IETF meetings are open to all is undermined by the fact that an average IETF meeting will cost you around $1500 to attend. Hence, there is an economic barrier to the participation of individuals. Additionally, the influence you might have on a process is proportional to the consideration you get from your peers. Newcomers need quite some time to get accepted by the community, especially if they are not engineers.
Companies are driven by the market. If there is no potential market, there is no need to develop a new standard. A good example of this is the fact that you cannot yet send an e-mail to, say, brønshøj@københavn.dk, or to addresses in native Cyrillic, Arabic or Asian scripts. Pretty soon, the right-hand side will be dealt with, thanks to IDNs. But the use of non-ASCII character sets on the left-hand side is still not standardized. The EAI working group in the IETF was only launched a few months ago. Why did it take so long? I guess that the need for this has only appeared in recent years. As long as the Internet was mainly used by the American/Western European world, being restricted to 7-bit ASCII was not much of an annoyance, if at all. Now that the user base has grown to include countries that do not use the Latin alphabet, it has become a hot topic. However, it will take years before this can be implemented in the software we use every day. Notice, for example, that most operating systems today still require the user name to be in 7-bit ASCII.
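The asymmetry between the two sides of the address is visible in today's tooling: the domain has a standard ASCII-compatible encoding (IDNA/Punycode), so legacy DNS software can keep working, while the local part has no such fallback and the whole delivery path must learn to handle the raw characters. A quick illustration:

```python
# Right-hand side: IDNA gives the non-ASCII domain an ASCII form that
# unmodified DNS software can resolve.
domain = "københavn.dk"
print(domain.encode("idna"))  # b'xn--kbenhavn-54a.dk'

# Left-hand side: the e-mail standards define no ASCII downgrade for the
# local part, so "brønshøj" simply cannot be squeezed into 7-bit ASCII.
local = "brønshøj"
try:
    local.encode("ascii")
except UnicodeEncodeError:
    print("local part is not representable in 7-bit ASCII")
```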

Similar issues exist with the RIRs, where again the actual users of IP addresses are absent, for the same set of reasons detailed above. Yet the question of which IPv6 prefix your ISP will allocate to your home network a few years from now is an important one. Those who are active in policy development at the RIR level are those very ISPs. The policy will reflect their commercial interest, which may – or may not – match the users’ interests.

End users are represented in ICANN. I am the first to admit that ALAC may be far from perfect, but it has the merit of existing, and we can improve it. Isn’t it time for a similar concept for the IETF, the RIRs and all those bodies that have a crucial effect on our experience of using the Internet? Being closer to user needs, without the filtering of the marketing department, may help prioritize future developments.

Worth reading and studying: the Cooperative Domain Name System (CoDoNS) by Venugopalan Ramasubramanian and Emin Gün Sirer, a paper by two scientists at Cornell on a distributed system to replace our good old DNS.

From the abstract: “This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through pro-active caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement.” (bold added by yours truly).
More info, including an FAQ, at http://www.cs.cornell.edu/people/egs/beehive/codons.php.