When you name a program SATAN, you can expect your intentions to be misread. Wietse Venema discovered this firsthand when he and colleague Dan Farmer released the Security Administrator Tool for Analyzing Networks, auditing software designed to let administrators test their own networks for vulnerabilities - but immediately misconstrued as a toy for budding crackers.

There's little chance that mistake will be repeated. Venema's name has become synonymous with security in the minds of sysadmins worldwide, thanks to his work on SATAN, TCP Wrappers, and a host of other tools to keep the script kiddies at bay. This work hasn't gone unnoticed: at the LISA '99 conference last November, Venema received the SAGE Outstanding Achievement Award, an honor previously bestowed upon the likes of Paul Vixie and Larry Wall.

The other thing Venema's famous for, of course, is Postfix, the mail transfer agent he wrote after coming to IBM's Thomas J. Watson Research Center from the Netherlands. Known briefly by the name "VMailer," Postfix aims to be "fast, easy to configure, and hopefully secure." We spoke with Venema by phone about Postfix, security, and the superiority of asynchronous communication - i.e., email.

Hopefully those will be in the next official release. I've received several donated pieces of software from people who use my Postfix program, and who have added things like authentication and encryption. I'm in the process of merging that donated code into my software.

Do you foresee any legal issues with open sourcing that technology once it's been integrated?

There's no longer an issue in doing this from the United States, because as of January 15, it's legal to export open source code with encryption functions. You only have to send a notice to the Bureau of Export Administration.

Do you have any particular plans for September 20, the day the RSA patent runs out?

[laughs] Well, it expires several times, doesn't it? I think the US and Canadian expiration dates are different.

I have no specific plans with respect to RSA. In any case, the encryption would not be shipped with Postfix. All I would provide is a version of Postfix where you can throw in the encryption source code. I don't think it makes sense to put a copy of all the SSL and related stuff inside of Postfix.

Just in terms of code bloat and efficiency?

Exactly. And if somebody else fixes a problem in SSL, then I won't have more work to do. [laughs]

You seem to have plenty to do as it is. What projects are you working on right now besides Postfix? Last I heard, you were porting TCP Wrappers to IPv6. Is that done already?

No, that still needs to be done.

How's it going?

Slowly. It's a problem, because by now lots of vendors are shipping systems with IP version 6 support. Postfix has taken a lot of my attention, and of course there are other things too, like the forensics tools I've been working on with Dan Farmer. We gave a free, full-day class on computer forensics last August and promised about six tools. The tools were given to the attendees as a beta release, but we really want to finish a first public release. I was actually hoping we would do it in April.

Can you describe those tools in more detail?

In our class, we explained how to get evidence after a break-in. There are all kinds of little bits and pieces of information that stay behind on a system when it's being used. And it's all very volatile, because every bit on a computer will eventually be overwritten, so you have to be really careful to recover the information or else it's gone forever. The most spectacular tools we have are for recovering deleted files. And they turn out to work much better than we expected.

You used to say you wrote every single packet you received to disk as insurance against crackers. Is that still true?

On my previous machine in the Netherlands, I used to do that. I'm not doing it currently, but I am in the process of reinstating that functionality.

Do you consider that good policy for a heavily targeted site?

Well, that depends on how much disk you have and on the bandwidth of your network. When I connected my own machine to the Internet a couple of years ago, I thought, "Well, maybe some people will attack me. How will I find out?" One possibility was to record everything that came over my network to a disk on a machine that no one could break into, and every now and then just look at the logs. Even if somebody managed to break in, it would be recorded and I could still see what happened and what the damage was.

Well, the unfortunate thing is that nothing ever happened. So I recorded several years of "nothing happened." [laughter] I didn't do this when I came to the US, but I'm going to do it again, just as a kind of insurance.
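
The setup Venema describes - append every packet to durable storage on a hardened machine, then review at leisure - can be sketched in a few lines. The record format and the sample packet bytes below are illustrative, not his actual tooling:

```python
import io
import struct
import time

def log_packet(fh, packet: bytes) -> None:
    """Append one record: 8-byte timestamp, 4-byte length, then raw bytes."""
    fh.write(struct.pack("!dI", time.time(), len(packet)))
    fh.write(packet)

def read_log(fh):
    """Yield (timestamp, packet) records until the log is exhausted."""
    while True:
        header = fh.read(12)
        if len(header) < 12:
            return
        ts, length = struct.unpack("!dI", header)
        yield ts, fh.read(length)

# Capture to an in-memory "disk" for illustration; a real deployment would
# write to an append-only file on a machine nobody can log in to.
log = io.BytesIO()
log_packet(log, b"\x45\x00\x00\x28")  # a hypothetical raw IP header fragment
log.seek(0)
timestamp, packet = next(read_log(log))
```

The append-only, length-prefixed layout matters here: even if an intruder later gains access elsewhere, everything up to the break-in is already on record.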

I post to it infrequently, and I follow it from a distance. I don't know if you've noticed, but there are several new vulnerabilities every week. I don't think anybody on this planet is really keeping up to date with all these vulnerabilities. [laughs]

When we interviewed Paul Vixie some months ago, he pointed out that Internet protocols simply were not designed to deal with malice. They were designed for civilized people who behaved in a civilized way - friends, fellow researchers, colleagues. Do you think vulnerability is built into the protocols of the Internet? To what extent is IPv6 successful in solving that problem?

It's very difficult to build protocols that will survive and do well in an unfriendly world. By definition, you're trying to connect systems to each other and exchange information between them. It's really hard to design protocols that let you connect your machine to the network and still be immune to malice. On the other hand, it's very easy to screw up and design protocols that act as amplifiers for the kind of attacks we've seen in the last couple of years.

What elements of the Internet protocol do you see as having specific potential to amplify attacks?

One very well-known attack is where I send one IP packet to a broadcast address on your network, and then every machine will respond. So if you have a network with one hundred machines, all I have to do is to send one packet and they will send one hundred packets.

So you have instant denial of service.

Yes. If I can spoof my sender address, then suddenly, with very little effort, I can make other machines do a lot of work for me - DoS-style attacks. You could do the same thing at the application level of a protocol. But the broadcast is a very popular example.
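
The arithmetic of that broadcast amplification can be sketched in a few lines; the host count and packet sizes below are illustrative:

```python
def smurf_amplification(hosts: int, request_bytes: int, reply_bytes: int):
    """Return (bytes the attacker sends, bytes the spoofed victim receives)
    for a single echo request sent to a network's broadcast address."""
    return request_bytes, hosts * reply_bytes

# One 64-byte ping to a broadcast address on a network of 100 hosts:
sent, received = smurf_amplification(hosts=100, request_bytes=64, reply_bytes=64)
amplification = received / sent  # every attacker byte becomes 100 victim bytes
```

The victim is whoever's address the attacker forged as the source: all one hundred replies converge on that machine, not on the attacker.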

How about IPv6? Does it have similar vulnerabilities?

IP version 6 uses encryption and authentication - and again, it turns out to be very difficult to design the cryptographic protocol such that a client cannot force the server into doing a lot of expensive computations.

Which again results in denial of service?

Exactly. Encryption is computationally intensive. So by forcing the server into compute-intensive operations, you can inflict a processor-level denial of service. The server has to do a lot of cryptographic work, only to find out that the client was just sending it garbage.
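
That asymmetry - cheap garbage for the client, expensive math for the server - can be sketched with toy numbers. The modulus and exponent below are illustrative stand-ins, not a real key; genuine 2048-bit RSA parameters would make the server's exponentiation far more expensive still:

```python
import os

# Toy stand-ins for a server's RSA key material (NOT a real key pair).
MODULUS = (1 << 255) - 19
PRIVATE_EXP = (1 << 254) + 0x5D3  # hypothetical private exponent

def client_garbage(nbytes: int = 32) -> bytes:
    """The attacker's cost per request: random bytes, nearly free to make."""
    return os.urandom(nbytes)

def server_handle(handshake: bytes) -> bool:
    """The server must do a full modular exponentiation before it can
    discover that the 'ciphertext' has no valid structure at all."""
    c = int.from_bytes(handshake, "big") % MODULUS
    m = pow(c, PRIVATE_EXP, MODULUS)  # the expensive step
    # PKCS#1-style sanity check: random garbage essentially never passes.
    return (m >> (MODULUS.bit_length() - 16)) == 0x0002
```

The imbalance is the whole attack: the client does almost nothing per request, while the server burns a big-number exponentiation on each one before it can reject it.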

Do you see a way around that?

I have no solution at hand. It is a very difficult problem.

In your Bugtraq posting about the TCP data corruption problem, you mentioned that IPSEC's limitations on traffic manipulation have provoked some controversy.

Yes, that's true. In fact, Steve Bellovin at AT&T is working on a proposal - at least, I understand he's working on something along those lines. People do want to see some information about traffic. They do want to do traffic analysis. If I'm a network provider, I want to know what kinds of traffic are going over my network so I can do capacity planning. If all the traffic is totally encrypted, it's very difficult to find out what's going on.

The problem we were dealing with in the Bugtraq posting was a bandwidth management system that just changes a few parameters in TCP headers so that the traffic flows more smoothly. Those are things that you simply cannot do when all the data is protected by digital signatures and such. So there are several conflicting requirements: people wanting to see more of the traffic than encryption would let them, and people wanting to do some management of the traffic, like bandwidth allocation.
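
The conflict is easy to demonstrate: once a header is covered by a cryptographic tag, any in-flight "management" of it breaks verification. A minimal sketch, assuming a simplified TCP-like header and a hypothetical shared HMAC key:

```python
import hashlib
import hmac
import struct

KEY = b"endpoint-shared-secret"  # hypothetical key known only to the endpoints

def make_segment(src_port: int, dst_port: int, seq: int, window: int) -> bytes:
    """Pack a simplified TCP-like header and append an HMAC tag over it."""
    header = struct.pack("!HHIH", src_port, dst_port, seq, window)
    return header + hmac.new(KEY, header, hashlib.sha256).digest()

def verify(segment: bytes) -> bool:
    """Recompute the tag over the received header and compare."""
    header, tag = segment[:10], segment[10:]
    return hmac.compare_digest(tag, hmac.new(KEY, header, hashlib.sha256).digest())

seg = make_segment(1234, 25, 1, 65535)  # verifies fine end to end

# A bandwidth manager in the middle shrinks the advertised window in flight;
# the header no longer matches the tag, and the receiver rejects the segment.
tampered = seg[:8] + struct.pack("!H", 1024) + seg[10:]
```

From the endpoints' point of view this rejection is the feature, not the bug - exactly the "no helpful intermediaries" property Venema describes next.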

So there's an ease-of-administration tradeoff for the security that you get. The encryption has an administrative cost.

Yes, the encryption makes it impossible to do certain operations. Now, from the point of view of security, this is exactly what you want. All you want is to send your data to the other machine, and it should be sent unchanged: no man-in-the-middle attacks by "helpful" intermediate systems. So these are conflicting requirements.

You've been associated with IBM for a couple of years now, right? Did you come in '96, '97, '98?

Yes, yes, yes. [laughs] The answer is "yes" in all three instances. I came in 1996 as a visitor, and then I stayed a bit longer, and then I joined IBM. So yes, the whole process extends from 1996 to the summer of 1998.

Do you have pretty broad latitude in terms of the work you do there?

Yeah, this is research. So I have a lot of liberty.

Beyond giving you the liberty to work on it, is IBM supportive of Postfix and the other software you produce? Do they actually help you get it out there to people? Do they help you stay in touch with users and so on?

Well, it's pretty much up to me, I think. You have to coordinate a few things with people, but it works fine. I can do a lot of things. [laughs] I could probably do more.

How do you see the relationship between open source development and the commercial marketplace? Companies like Red Hat and Sendmail have married the two to create a revenue stream that can better support open source development. Does this seem like a valid approach to you?

It seems to make sense. I don't have much to say on this subject, but it seems to make sense.

You've described the creation of Postfix as a response to sendmail and an attempt to create an alternative to sendmail. What do you think about cooperation versus competition in the free software world? Is variety a good thing?

I think it's beneficial when multiple implementations coexist. Not too many, because that confuses people. But a number of implementations can coexist, because it gives people more choice.

What about sharing features and ideas? Eric Allman has told me on several occasions of the high regard he has both for you personally and for your work on Postfix. Are there things that you've learned from sendmail and implemented in Postfix? Are there things sendmail can learn from Postfix?

The same thing happens with everybody who implements a system from the ground up. Just as Eric Allman had to solve problems as he was building sendmail over time, I ran into many of the same problems myself and tried to solve them. And then I looked at how sendmail solved it, or how other systems solved it. Sometimes you find your solution is better, and sometimes you find your solution is worse. Sometimes you find a security problem in the other person's solution. All these things happen. But the end result, of course, is that people will have better software.

So you wouldn't be offended if Eric looked at Postfix and said, "Hey, what a great idea, Wietse. I think I'll steal it."

Not at all. I stole some ideas from qmail. And of course, I stole the user interface from sendmail. But that was also for compatibility reasons.

I guess you have the same problem whether you're developing a new application or doing a new version of an existing application: you have to keep from scaring users off by changing things too fast.

Yes, and in fact, these things become a problem very quickly. Once you give your software to other people, that's where it starts. You can no longer just totally change the software, because people have become dependent on it.

Are you planning to implement message tracking in Postfix, at least once there's a standard protocol?

Well, if there is a standard. I really don't like to reinvent wheels. But if people need this functionality, it will eventually be part of the mail system, yes. But you really need a standard that goes across multiple systems. Because mail usually goes from one system via a few other systems to its final destination, and if you want to do tracking, all these different systems need to do something comparable.

As you look forward at the future of email and Internet communication generally, aside from the obvious things - many more users and much greater security issues - what do you see as the forces shaping the evolution of these technologies? What direction do you see email going in, beyond simple one-to-one messaging?

I'm actually surprised that email still works so well, and works the way it does. Because it hasn't changed fundamentally over the last couple of decades. It is still doing the same thing, I believe. Except that it's reaching a lot more people. But the basic model hasn't changed at all.

Do you see the possibility for that model to evolve?

It's possible, but it hasn't really evolved at all. Instead, what you see is new applications coming up and, side by side, email. There is the World Wide Web - and email has, of course, inherited some of its features. But the basic model is still the same: I sit at my screen, I do stuff, I send it to you. You receive my message and read it from your screen. And the two actions are completely disconnected. That's the basic character of email. If people want realtime conversations, they have lots of choices already.

Many people see email's asynchronous aspect as a big advantage. It gives you a rhythm of communication that's much more appealing in certain ways.

Absolutely. I prefer email over telephone calls anytime. Not that - [laughter]

That sounds like a good note to end on.

So maybe I had a personal interest in having a reliable mail system. [laughter]