from the capabilities-or-culpability? dept.

Bruce Schneier, author of the standard reference Applied Cryptography, has a new book out called Secrets and Lies. In an interview in Salon he talks about the book's main thesis: that secure computing is impossible: "Given the inevitability of attacks, 'prevention' can no longer be the security buzzword. Just as even the finest hockey goalies must regularly suffer the humiliation of allowing a goal, companies must learn to live with penetrations. Prepare for the worst, Schneier urges." Has the man never heard of capability security?

This entry was posted
on Thursday, August 31st, 2000 at 10:12 AM and is filed under News.
You can follow any responses to this entry through the RSS 2.0 feed.
You can leave a response, or trackback from your own site.

16 Responses to “Schneier: computer security is impossible”

… but capabilities are not a panacea. It's harder to leak privilege in a capability system, but it's not impossible. There will always be bugs in programs. Capability systems have some great properties, but I suspect that one reason for their strong track record is that they've always been worked on by the clued. Give one to Microsoft, and you'll have serious breaches in no time.

Actually, I'd like to see multiple independent mechanisms. Imagine a system that combined capabilities with, say, something like MLS, so that even if you did leak a capability, it would be useless to the recipient unless they also defeated the other measures.
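To make the capability idea concrete, here's a minimal, purely illustrative Python sketch (the names are mine, not from any real system): authority is just an object reference, so you can hand a component a read-only facet instead of the whole store.

```python
class ReadCap:
    """A read-only facet: holding this grants reads, and nothing else."""
    def __init__(self, store):
        self._store = store          # the raw store stays hidden from holders

    def read(self, key):
        return self._store[key]

def attenuate(store):
    """Delegate read access without delegating write access."""
    return ReadCap(store)

secrets = {"api_key": "hunter2"}
cap = attenuate(secrets)
print(cap.read("api_key"))           # the holder can read...
# ...but there is no write method to leak: losing `cap` loses only reads.
```

Python doesn't actually enforce encapsulation (the underscore is only a convention), so this just illustrates the shape of attenuation; a real capability system like EROS makes the reference itself unforgeable.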

… and, of course, it's not clear that anybody's actually prepared to do all the reimplementation effort to deploy anything really secure anyway. It's not so much that security is hard as that you have to build it in from the ground up.

That said, I think that Schneier does go too far in giving up on prevention. Also, he seems to be joining up with the intrusion detection camp… and, although intrusion detection looks appealing at first, once you learn about it, you discover that it has the same holes as prevention. Except for things like DoS, it's not much easier to detect an attack than it is to prevent it.

Disclaimer: this is based on other stuff he's published recently; I haven't read that particular interview.

When the singularity comes to pass (or at least a time when Moore's Law goes on steroids, for those of you in opposition), keeping computers secure will be very difficult. Remember when 56-bit private-key encryption was enough? Now anyone with enough processing power can easily break it. In the future, quantum computers will make many current ciphers moot (RSA, for example). Similarly, DNA processors will shoot a hole in evolving ciphers. With the singularity, almost any key will fall to, if nothing else, brute-force attacks. Posthumans will find that the only way to keep anything secret will be not to transmit it (the same applies already, to a lesser extent, since there are classes of ciphers that cannot be broken within a reasonable time frame).

This assumes that such supercomputing and evolution won't reveal some much more powerful secrecy techniques. Cryptography isn't about keeping information secret; it's about keeping it secret for a certain length of time.

Even if a quantum supercomputer can decipher a trillion bit key in one second, then a 500 trillion bit key should take it 500 seconds. Any computer that can decipher a trillion bit key should be able to generate keys that are orders of magnitude above a trillion bits.
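If anything, the arithmetic above is conservative: against classical brute force, cost doesn't grow linearly with key length but doubles with every added bit. A quick sketch of the scaling (the rate of 10^18 keys/second is an arbitrary assumption for illustration):

```python
# Classical brute-force search cost doubles with every key bit:
# a k-bit key means 2**k candidate keys to try in the worst case.
def brute_force_seconds(key_bits, keys_per_second):
    return (2 ** key_bits) / keys_per_second

rate = 10**18                        # hypothetical attacker: 1e18 keys/sec
print(brute_force_seconds(56, rate))   # 56-bit: a fraction of a second
print(brute_force_seconds(128, rate))  # 128-bit: ~3.4e20 seconds, far
                                       # beyond the age of the universe
```

So the defender's advantage compounds: each bit the key generator adds doubles the attacker's work, whatever the attacker's raw speed.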

Even assuming "The Singularity" had occurred, such transinformation systems will likely be able to come up with ways to keep secrets from others. If one is going to argue for a limitless system, then one mustn't attach limits to that system in one's arguments.

Heh.. but, actually, it occurs to me that I probably shouldn't have included the traditional dig at Microsoft. Although they've had plenty of problems, they don't seem all that different from the rest of the software industry. Everybody is screwed up. I have horror stories about supposedly professional programmers at all kinds of places.

It seems to me that all these "integrated desktop environments" with lots of weird objects running around, built by people who haven't really thought or read seriously about security, on frameworks whose security is an afterthought at best, are about like Windows in their security.

A big part of the reason things are this way is that a lot of software is written by people who are, by historical standards, half trained. And those people are being pushed to get stuff to market quickly at all costs.

Until more programmers really understand the issues, until all the tricks for avoiding problems are second nature, and until people are given the time and resources to do things right, these problems will be with us. If those things ever do happen, we'll be able to implement all the goodies we already know about, but until then the technology won't help that much anyway.

MS gets higher NSA Orange Book ratings than OpenBSD, but to get them NT has to have almost ALL of its services *turned off*!!! OpenBSD is the most secure OS around, flat out. It's really incredibly annoying that capitalizing "BSD" made this post fail the lame filter!

Go on IRC. #Legions. Make an ass of yourself. Wait to get r00ted. OpenBSD is almost entirely free of buffer overflows, and even the swap space is heavily encrypted…. Perhaps I should have said FREE OS, for all the pedant-heads here…

OpenBSD is almost entirely free of buffer overflows, and even the swap space is heavily encrypted….

OK, but that's not nearly everything. If you run a Web browser on it, you're not going to be safe from buffer overflows. It may be tough to get root, but that's small comfort if they can still do anything they want to all your user-level data.

Once again, it's not that I don't think OpenBSD is a good thing. If I were setting up a Web server I really cared about (as opposed to the toy one I actually run), then I'd probably use OpenBSD on it.

Even so, the fundamental security model of OpenBSD is still a UNIX model, in which it's harder to do things securely, and easier to blow the security of the whole system by installing one ill-advised program, than in some other models. OpenBSD still has the concept of "root", for example, and it still grants access to resources at the very coarse-grained level of user IDs.
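The granularity difference is easy to show in a few lines (a sketch; the function names are mine). Under the Unix model, a program's authority is ambient: any code running as your UID can open anything your UID can reach. In a capability style, the caller passes exactly the handle it wants to share, and nothing more.

```python
import os
import tempfile

# Ambient authority (Unix style): the callee names the file itself, and
# gets whatever the process's user ID happens to be allowed to open.
def unix_style(path):
    with open(path) as f:
        return f.read()

# Capability style: the callee receives an already-opened handle and can
# touch only that one resource -- nothing else, however privileged we are.
def cap_style(file_obj):
    return file_obj.read()

fd, path = tempfile.mkstemp()
os.write(fd, b"secret")
os.close(fd)

with open(path) as handle:           # grant access to this one file only
    print(cap_style(handle))
os.remove(path)
```

One ill-advised program hurts much less in the second style: the worst it can do is abuse the handles it was explicitly given.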

Of course, as I've been saying, it's pretty easy to screw up in any model. Even a pure capability system will eventually fall apart if you drop it into the environment in which most of today's software is written.

Actually, OpenBSD is an interesting case. Here we have people who really are paying attention to security, and really have thought about the issues, but they're still forced, by installed-base realities, to keep fixing bugs in the fundamentally flawed Unix model, rather than starting fresh.

Perhaps I should have said FREE OS, for all the pedant-heads here…

EROS is GPLed, and has been for a long time. How much more free do you want it to be? Of course, no apps run on it… and porting them in such a way as to preserve any fine-grained security would be a real pain.

From the point of view of evolving systems, the constant that no security is perfect will be a powerful engine for innovation. I never saw hacking or hackers as a detriment. Instead they keep systems from stagnating; they force improvements and creative solutions in security programming. In turn, an improved security system will only inspire innovative ways of cracking it. This can only benefit us by helping technology as a whole improve. I feel that the best security system will be one that is completely dynamic: one that continually changes and reprograms itself, one that plans on being cracked and acts accordingly. Most likely some sort of AI will sooner or later be created for this purpose, and in turn an AI will be created to defeat it. Thus an evolving system is born from a simple conflict.

Even if a quantum supercomputer can decipher a trillion bit key in one second, then a 500 trillion bit key should take it 500 seconds

I may have misunderstood quantum computing, but I believe that if it can crack a key at all, it can do it in one cycle. Is this not right? If it is, then the object is to have more bits in the key than any quantum computer has been built to handle, and you're safe until they catch up.
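For what it's worth, my understanding (hedged, since I'm no quantum physicist) is that this is not right: Grover's algorithm, the known quantum speedup for unstructured key search, gives a quadratic improvement, not a one-cycle crack. Searching 2^n keys takes roughly (π/4)·2^(n/2) iterations, which is why the usual advice is just to double your symmetric key length:

```python
import math

def grover_iterations(key_bits):
    """Approximate Grover iterations to search a 2**key_bits keyspace."""
    return (math.pi / 4) * math.sqrt(2 ** key_bits)

print(grover_iterations(128))   # ~1.4e19 iterations -- hardly one cycle
print(grover_iterations(256))   # ~2.7e38 -- doubling key length restores
                                # the classical security margin
```

(RSA is the different, scarier case: Shor's algorithm factors in polynomial time, so no amount of extra bits saves it against a large quantum computer.)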