The Emergent Chaos Jazz Combo


Rebuilding the internet?

Once upon a time, I was uunet!harvard!bwnmr4!adam. Oh, harvard was probably enough; it was a pretty well known host in the uucp network which carried our email before smtp. I was also harvard!bwnmr4!postmaster, which meant that at the end of an era, I moved the lab from copied hosts files to dns, when I became adam@bwnmr4.harvard…wow, there’s still a CNAME for that host. But I digress.

Really, I wanted to talk about a report, passed on by Steven Johnson and Gunnar Peterson, that Vint Cerf said that if he were re-designing the internet, he’d add more authentication.

And really, while I respect Vint a tremendous amount, I’m forced to wonder: Whatchyou talkin’ about Vint?

I hate going off based on a report on Twitter, but I don’t know what the heck a guy that smart could have meant. I mean, he knows that back in the day, people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn’t get us in too much trouble. (Hi S! Hi C!) So when he says “more authentication” does that mean inserting “uunet!harvard!bwnmr4!adam” in an IP header? Ensuring your fingerd was patched after Mr. Morris played his little stunt?

But more to the point, authentication is a cost. Setting up and managing authentication information isn’t easy, and even if it were, it certainly isn’t free. Even more expensive than managing the authentication information would be figuring out how to do it. The packet interconnect paper (“A Protocol for Packet Network Intercommunication,” Vint Cerf and Robert Kahn) was published in 1974, and says “These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate.” That was before DES (1975), before Diffie-Hellman (1976), Needham-Schroeder (1978) or RSA. I can’t see how to maintain that principle with the technology available at the time.
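For concreteness, here is a sketch of the kind of key agreement that only became public with Diffie-Hellman in 1976, two years after the Cerf-Kahn paper. The parameters are toy values chosen for illustration, not anything you would deploy:

```python
import secrets

# Toy Diffie-Hellman exchange. Real use needs a large safe prime
# (or an elliptic-curve group); these parameters are illustrative only.
p = 4294967291          # largest prime below 2**32 -- far too small for security
g = 5                   # generator, chosen for illustration

a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

A = pow(g, a, p)        # Alice transmits A in the clear
B = pow(g, b, p)        # Bob transmits B in the clear

# Each side derives the same shared secret; the secret itself never
# crosses the wire -- exactly what 1974 protocols had no way to do.
assert pow(B, a, p) == pow(A, b, p)
```

Note that even this only gets you a shared key with *somebody*; binding that key to an identity is the expensive part the post is talking about.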

When setting up a new technology, low cost of entry was a competitive advantage. Doing authentication well is tremendously expensive. I might go so far as to argue that we don’t know how fantastically expensive it is, because we so rarely do it well.

Not getting hung up on easy problems like prioritization or hard ones like authentication, but simply moving packets, was what made the internet work. Allowing new associations to be formed ad hoc made for cheap interconnections.

So I remain confused by what he could have meant.

[Update: Vint was kind enough to respond in the comments that he meant the internet of today.]

10 thoughts on “Rebuilding the internet?”

Me too, but I just assumed that either:
1.) He was much smarter than I and had in mind a model that I couldn’t guess at without details, or
2.) He didn’t have a model in mind at all, but wishes he had taken the time to really consider the probability that lack of auth would become a significant problem.

I have only a moment to respond. The point is that the current design does not have a standard way to authenticate the origin of email, the host you are talking to, the correctness of DNS responses, etc. Does this autonomous system have the authority to announce these addresses for routing purposes? Having standard tools and mechanisms for validating identity or authenticity in various contexts would have been helpful. Digital signature technology can help here but just wasn’t available at the time the TCP/IP protocol suite was being standardized in 1978.

I have only a moment to respond. The point is that the current design does not have a standard way to authenticate the origin of email, the host you are talking to, the correctness of DNS responses, etc. Does this autonomous system have the authority to announce these addresses for routing purposes? Having standard tools and mechanisms for validating identity or authenticity in various contexts would have been helpful. Digital signature technology can help here but just wasn’t available at the time the TCP/IP protocol suite was being standardized in 1978.
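The digital-signature idea can be made concrete with a textbook-RSA sketch. The primes below are toy values and there is no padding, so this is purely illustrative; real deployments use large keys and a scheme like RSASSA-PSS. The sample message is a hypothetical SMTP envelope line:

```python
import hashlib

# Textbook RSA with tiny primes -- illustration only.
p, q = 61, 53
n = p * q                             # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (= 2753)

def sign(msg: bytes) -> int:
    """Hash the message, then apply the private key."""
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg: bytes, sig: int) -> bool:
    """Anyone holding (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

msg = b"MAIL FROM:<adam@bwnmr4.harvard.edu>"   # hypothetical example
sig = sign(msg)
assert verify(msg, sig)
assert not verify(msg, (sig + 1) % n)   # a tampered signature fails
```

This is the mechanism that, as the comment notes, simply wasn’t on the table when the TCP/IP suite was being standardized.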

Wow, Vint is so cool, even his blog comments have redundancy 🙂 Must be nice knowing that such a nice (I’ve met him briefly once, and he was conversational even though he didn’t know me from, well, Adam) and visionary guy reads, and takes time to comment on, this blog 😀
Anyway, I did pop on to comment, but I think Dr Cerf summed it up nicely. I think he would have liked to have done several things, given the chance to revisit it now, but at the time it was kinda “make do” – I don’t believe that the threats we have now were even considered in the beginning, and it’s difficult (and in a way incorrect) to put in mitigations for a threat that doesn’t exist or isn’t on the horizon.

I’ll make the same point here that I made on Gunnar’s blog and in separate emails….
The question here isn’t necessarily of end-user authentication. The question/problem is about network-level authentication. None of the protocols even have the idea of authentication data, much less what it should look like.
When you look at protocols such as UDP over IPv4, there is precisely no authentication data, and spoofing without explicit network rules is entirely possible.
When you look at a protocol at the same layer, such as IPv6, spoofing of off-subnet addresses is not possible in the same way, because of how the design works.
This is, in my opinion, a step forward. Perfection, no. Some basic ability to tell where packets came from = good.
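The “precisely no authentication data” point is easy to see in the header format itself: the source address is just four bytes the sender fills in. A minimal sketch in Python, building (not sending) an IPv4 header with an arbitrary claimed source — the addresses are hypothetical examples:

```python
import struct

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Build a minimal 20-byte IPv4 header. Nothing in the format proves
    the sender actually holds the source address -- it's just a field."""
    def ip2bytes(ip: str) -> bytes:
        return bytes(int(octet) for octet in ip.split("."))

    ver_ihl = (4 << 4) | 5                  # IPv4, 5 x 32-bit words
    total_len = 20 + payload_len
    hdr = struct.pack("!BBHHHBBH4s4s",
                      ver_ihl, 0, total_len,
                      0, 0,                 # identification, flags/fragment
                      64, 17,               # TTL, protocol = UDP
                      0,                    # checksum placeholder
                      ip2bytes(src), ip2bytes(dst))

    # Internet checksum: one's-complement sum of 16-bit words.
    s = sum(struct.unpack("!10H", hdr))
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)
    csum = (~s) & 0xFFFF
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

# A header claiming to come from 10.0.0.1 -- the format happily accepts
# any source address, and the checksum only protects against corruption,
# not against lying.
hdr = ipv4_header("10.0.0.1", "192.0.2.7", 8)
assert hdr[12:16] == bytes([10, 0, 0, 1])
```

The checksum guards integrity in transit, not authenticity; that asymmetry is exactly the gap being discussed.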

“Digital signature technology can help here but just wasn’t available at the time the TCP/IP protocol suite was being standardized in 1978.”
Right, but that’s just the tech. Unfortunately, it isn’t the meaning of authentication or authorisation or any other auth-shun. There is no way to code this meaning; it’s human. Consequently, it is inevitable that good auth schemes have to be designed at the application level, where the humans are, and customised heavily and differently for each application. For this reason, there probably can’t be “a standard way to authenticate,” and while many try, they only seem to succeed in re-learning the same lesson: push the auth to the top layer.