|To be more specific, <span class="e">E </span>uses ''object-capabilities'', a type of capability security that has strengths not found in several weaker "capability" systems, as discussed in [http://www.hpl.hp.com/techreports/2003/HPL-2003-222.html Paradigm Regained].

{{:Walnut/Secure_Distributed_Computing/Auditing_minChat}}

|}

{{:Walnut/Secure_Distributed_Computing/Capability_Patterns}}

<span class="e">E</span> uses ''capability based security'' to supply both strong security and broad flexibility without incurring performance penalties. Capabilities might be thought of as the programming equivalent of physical keys, as described with the following metaphors.

{{:Walnut/Secure_Distributed_Computing/Satan_at_the_Races}}

'''Principle of Least Authority (POLA)'''

<span style="color:red">Example: MarketPlace</span>

When you buy a gallon of milk at the local 7-11, do you hand the cashier your wallet and say, "take what you want and give me the rest back?" Of course not. In handing the cashier exact change rather than your wallet, you are using the ''Principle of Least Authority'', or POLA: you are giving the cashier no more authority than he needs. POLA is a simple, obvious, crucial best-practice for secure interactions. The only people who do not understand the importance of POLA are credit card companies (who really do tell you to give that far-off Internet site all your credit, and hand back what they don't want), and computer security gurus who tell you to use more passwords.

====Children with your ID badge====

Suppose all security in the physical world were based on ID badges and ID readers. At your home you might put an ID reader on your door, another on your CD cabinet, and another on your gun vault. Suppose further you had to depend on 4-year-old children to fetch your CDs for you when you were at the office. How would you do it? You would hand your ID badge to the child, and the child could then go through the front door and get into the CD cabinet. Of course, the child with your ID badge could also go into the gun vault. Most of the children would most of the time go to the CD cabinet, but once in a while one would pick up a gun, with lamentable results.

====Keys====

In the real physical world, if you had to depend on children to fetch CDs, you would not use an ID badge. Instead you would use keys. You would give the child a key to the front door, and a key to the CD cabinet. You would not give the child a key to the gun vault.

All current popular operating systems that have any security at all use the ID badge system of security. NT, Linux, and Unix share this fundamental security flaw. None come anywhere close to enabling POLA. The programming languages we use are just as bad or worse. Java at least has a security model, but it too is based on the ID badge system--an ID badge system so difficult to understand that in practice no one uses anything except the default settings (sandbox-default with mostly-no-authority, or executing-app with total-authority).

The "children" are the applications we run. In blissful unawareness, we give our ID badges to the programs automatically when we start them. The CD cabinet is the data a particular application should work on. The gun vault is the sensitive data to which that particular application should absolutely not have access. The children that always run to get a gun are computer viruses like the Love Bug.

In computerese, ID badge readers are called "access control lists". Keys are called "capabilities". The basic idea of capability security is to bring the revolutionary concept of an ordinary door key to computing.
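The difference between badge readers and keys can be compressed into a few lines of code. The following is a hypothetical Python analogy, not E; the classes `CDCabinet` and `GunVault` and the function `child_fetch_cd` are invented for illustration. The point is that a capability is just an object reference, and the "child" can use only the references it is handed.

```python
# Hypothetical sketch of capability discipline (Python analogy, not E).

class CDCabinet:
    """Holding a reference to this object is the key to the cabinet."""
    def __init__(self, cds):
        self._cds = list(cds)
    def fetch(self, title):
        return title if title in self._cds else None

class GunVault:
    """Dangerous authority: never handed to the child."""
    def open_vault(self):
        return "gun"

def child_fetch_cd(cabinet, title):
    # The child receives only the cabinet reference (a capability).
    # With no ambient authority and no global lookup, the gun vault
    # is simply unreachable from inside this function.
    return cabinet.fetch(title)

cabinet = CDCabinet(["Abbey Road", "Kind of Blue"])
vault = GunVault()   # exists, but no reference to it is ever passed on
print(child_fetch_cd(cabinet, "Abbey Road"))   # → Abbey Road
```

Contrast this with an ID-badge design, in which `child_fetch_cd` would look the cabinet up in some global registry using the owner's identity -- and could just as easily look up the vault.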

====Melissa====

Let us look at an example, in a computing context, of how keys/capabilities would change security.

Consider the Melissa virus, now ancient but still remembered in the form of each new generation of viruses that uses the same strategy Melissa used. Melissa comes to you as an email message attachment. When you open it, it reads your address book, then sends itself - using your email system, your email address, and your good reputation - to the people listed therein. You only had to make one easy-to-make mistake to cause this sequence: you had to run the executable file found as an attachment, sent (apparently) by someone you knew well and trusted fully.

Suppose your mail system was written in a capability-secure programming language. Suppose it responded to a double-click on an attachment by trying to run the attachment as an emaker. The attachment would have to request a capability for each special power it needed. So Melissa, upon starting up, would first find itself required to ask you, "Can I read your address book?" Since you received the message from a trusted friend, perhaps you would say yes - neither Melissa nor anything else can hurt you just by reading the file. But this would be an unusual request from an email message, and should reasonably set you on guard.

Next, Melissa would have to ask you, "Can I have a direct connection to the Internet?" At this point only the most naive user would fail to realize that this email message, no matter how strong the claim that it came from a friend, is up to no good purpose. You would say "No!"

And that would be the end of Melissa, all the recent similar viruses, and all the future similar viruses yet to come. No fuss, no muss. They would never rate a mention in the news. Further discussion of locally running untrusted code as in this example can be found later under Mobile Code.

Before we get to mobile code, we first discuss securing applications in a distributed context, i.e., protecting your distributed software system from both total strangers and from questionable participants even though different parts of your program run on different machines flung widely across the Internet (or across your Intranet, as the case may be). This is the immediate topic.

====Language underpinnings for capabilities====

There are a couple of fundamental concepts that a programming language must get right in order to support capability discipline. We mention these here.

=====No pointer arithmetic=====

Pointer arithmetic is, to put it bluntly, a security catastrophe. Given pointer arithmetic, random modules of code can snoop vast spaces looking for interesting objects. C and C++ could never support a capability system. Java, Smalltalk, and Scheme, on the other hand, did get this part of capability discipline right.

=====Object Encapsulation=====

In a capability language you cannot reach inside an object for its instance variables. Java, Smalltalk, and Scheme pass this test as well.

In JavaScript, as a counterexample, all instance variables are public. This is occasionally convenient but shatters any security hopes you might have. JavaScript is a relatively safe language only because the language as a whole is so thoroughly crippled.<font color="#000000"> We consider JavaScript ''safe'', but not ''secure''<nowiki>: We consider </nowiki>''security'' to require ''not only'' safety ''but also'' the power to get your work done: POLA means having enough authority, as well as not having too much. Using this definition of security, the Java applet sandbox is mostly safe, but not at all secure.</font> And a Java applet that has been allowed to run in a weaker security regime because the applet was "signed" is neither safe nor secure.
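What encapsulation buys can be shown outside E as well. The purse example below is our own invention (the name `make_purse` is hypothetical, not from this text), and it uses a Python closure as a stand-in for what a capability language guarantees for every object's instance variables: state is reachable only through the functions deliberately handed out.

```python
# Sketch: closure-based encapsulation (Python analogy; an E object
# gives this guarantee for all its instance variables automatically).

def make_purse(balance):
    def deposit(amount):
        nonlocal balance
        balance += amount
    def get_balance():
        return balance
    # Only these two functions escape. `balance` itself cannot be read
    # or reassigned from outside the closure by ordinary means.
    return deposit, get_balance

deposit, get_balance = make_purse(10)
deposit(5)
print(get_balance())   # → 15
```

In a JavaScript-style object, by contrast, `balance` would be a public property that any holder of the purse could set to a million.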

=====No mutable static state=====

In a capability system, the only source of positive authority for an object should be the references that the object holds.

Java fails here, along with Smalltalk and Scheme. A famous example of the trouble that static mutable state can get you into appeared in Java 1.0 (corrected in 1.1, an upward-compatibility break so rarely made that they could get away with it). The object System.out, to which people routinely sent print messages, was replaceable. A programmer in the middle of a bad hair day could easily replace this object, reading everything everyone else was writing and preventing everyone else from reading their own output.
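Python has the same hazard, which makes for a convenient sketch: `sys.stdout` is mutable static state that any module may replace, thereby intercepting every other module's output -- precisely the Java 1.0 System.out problem described above.

```python
# Sketch: mutable static state as ambient authority (Python analogy
# to the replaceable System.out of Java 1.0).
import io
import sys

saved = sys.stdout
sys.stdout = io.StringIO()    # any code, anywhere, can do this
print("quarterly report")     # every module's print now lands in our buffer
captured = sys.stdout.getvalue()
sys.stdout = saved            # put the world back

# The interceptor has quietly read output that was meant for the console.
```

Under capability discipline there is no such globally reachable, globally replaceable object: a module can write to an output stream only if someone handed it a reference to one.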

=====Carefully design the API so that capabilities do not leak=====

You can make everything else right, but if the APIs for a language were designed without consideration of security, the capability nature of the system is seriously flawed. Let us consider an example in Java. Suppose you had an analysis program that would present graphs based on the contents of an existing spreadsheet. The program only needs read access on the one spreadsheet file, it needs nothing else in your file system. In Java, then, we would grant the application an InputStream.

Unfortunately, an InputStream in Java leaks authority. In this example, you could "cast down" to a FileInputStream, from which you could get the File, from which you could get a WriteStream and a whole filepath, which would give you total access and control over the entire directory system.

To fix this problem for a single object in a single application, you could write your own InputStream class that doesn't leak. This strategy does not scale, however: requiring the programmer to write replacements for every object in the API to achieve security will result in few secure programs (just as requiring the programmer to write his own bitmap handlers and gui widgets will result in few point-and-click programs). To really fix this security problem in Java, you would have to rewrite the entire java.io package, wrecking backward compatibility as a side effect. Without that kind of massive, serious repair, it is always easier to create a breach than to create a secure interaction. With an infrastructure actively trying to harm you, what chance do you really have?
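The shape of such a non-leaking wrapper is easy to show. Below is a hypothetical Python sketch (the helper `make_readonly_facet` is invented for illustration, not part of any real API): the analysis program receives a bare read function, so there is nothing to cast down to, no path to recover, and no write stream to reach.

```python
# Sketch: a facet granting read access to one file and nothing else
# (Python analogy to a non-leaking InputStream).
import tempfile

def make_readonly_facet(open_file):
    def read(n=-1):
        return open_file.read(n)
    return read   # a bare function: no File, no path, no write stream behind it

# Demo: the one spreadsheet the analysis program is allowed to read.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("q1,q2\n10,20\n")
    path = f.name

read = make_readonly_facet(open(path))
print(read())   # the single authority granted: the bytes of this file
```

The point of the paragraph above stands, though: writing such wrappers one object at a time does not scale; the whole API has to be designed this way.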

===<span class="e">E</span> Capabilities===

<span class="e">E</span> has no pointer arithmetic. <span class="e">''E''</span> has no mutable statics. <span class="e">''E''</span> has an API carefully thought out to prevent capability leaks. This would make it a capability-secure language for single-processor applications. But <span class="e">''E''</span> goes a step further. It takes the concept of a secure, unforgeable object reference and extends it to distributed objects:

* The communication links are encrypted. Third parties cannot get inside the connection.

* The objects are unfindable without a proper reference received (directly or indirectly) from the creator of the object. You must have the key to unlock the door.

* The objects are authenticated. No object can pretend to be the object you are trying to contact.

These aspects of the <span class="e">''E''</span> protocol can be understood better by looking at the URI that identifies an <span class="e">''E''</span> object to other objects outside its own vat.

The whole <span class="e">E</span> approach to security is disorienting to anyone steeped in traditional computer security lore. On the other hand, anyone with a background in object-oriented programming will find it to be a natural extension of the OO discipline. Another entire book is needed on the philosophy of security exemplified by object capabilities and <span class="e">E</span>. [http://minnow.cc.gatech.edu/squeak/3770 The closest thing to such a Philosophy of Security known to the author is a requirements document] for the shared virtual 3D world [http://croquetproject.org/ Croquet]. Croquet needs to be both very user friendly and very secure, and so requires the kind of seriousness that object-capabilities enable. A brutally abbreviated list of points from that document includes:

'''No Passwords, No Certificates, No Certificate Authorities, No Firewalls, No Global Namespaces.''' POLA, pet names, and other related techniques described here supplant them all.

'''Minimize Authentication. Focus on Authorization.''' Authentication-laden systems are as user friendly as post-9/11 airport security systems. And they work about that well, too. The terrorists are far more likely to have their identification in order than the rest of us.

'''Do Not Prohibit What You Cannot Prevent.''' The main purpose of this advice is to prevent you from looking like a fool after you've been circumvented.

'''Embrace Delegation.''' People must delegate to succeed. So they will delegate despite your most ridiculous efforts. Relax, enjoy it. Help them. And while relaxing, re-read the earlier "do not prohibit what you cannot prevent."

'''Bundle Authority with Designation.''' This makes security user-friendly by making the security invisible. Only possible after you stop wasting your effort on authentication all the time.

|}

====Security as an inexpensive lunch====

There is no such thing as a free lunch, but this does not rule out the possibility of lunches at bargain prices. When programming in<span class="e">'' E''</span>, you are automatically working in a capability secure environment. All references are secure references. All powers are accessible only through capabilities. Making an<span class="e">'' E ''</span>program secure is largely a matter of thinking about the architecture before you code, and doing a security audit after you code. When designing and auditing you use a small number of principles for scrutinizing the design:

====Principle of Least Authority (POLA) for Computer Programs====

The Principle of Least Authority (known henceforth as POLA), which has been used by human beings instinctively for thousands of years, translates to computer programs fairly straightforwardly: ''never give an object more authority than it needs''. In particular, if an object may be running on a remote untrusted machine, think very carefully about the minimum set of authorities it needs, and give it capabilities only on ''facets'' (described later) that ensure it gets no more. A simple word processor needs read and write access to the one file it is being used to edit, and it needs read-only access on all the system fonts. It does not need any access to anything else. Do not give it access to anything else. If it asks for something else, it is lying to you, and you should not trust it.

====Principle of Hardware Software Ownership====

When developing software, remember that the person who controls the hardware always, at the end of the day, controls the software. Hence, if you send someone a piece of a distributed program to run on their own equipment, that person totally and utterly owns everything that resides on his machine. They can modify the code you gave them, or rewrite it from scratch yet make it look from the outside like it is identical. You must therefore think carefully about what features of your system you really trust on that remote machine. A key feature of E that enhances its reliability is that objects which are manufactured in a particular vat remain resident in that vat, so that the objects remain as reliable as the objectMakers used to produce them. Only ''transparent immutables'' (immutables that don't encapsulate anything) actually move across computational boundaries.

Many people have made the error of believing this principle of hardware ownership can be circumvented. At the time of this writing, the music recording industry is throwing away truly fabulous sums of money on schemes that assume they can somehow control data after it has arrived on a user's own hardware. Microsoft is busily developing Palladium (uh, I mean, NGSCB). Intel is busily developing TCP (uh, I think they changed the name to La Grande). Their fate has already been foretold in the fate of the popular game Diablo I: authoritative representations of data were allowed to reside on user computers, assuming that the object code was too difficult to understand to be hacked and that the Diablo client code would always behave as the designers intended. They were 99% correct, which in the computer security field means they were 100% wrong. Today, 99% of the people who hack Diablo I don't understand the object code. But somewhere some single individual figured it out and posted the result on the Web. Now your grandmother can sabotage shared Diablo I games as quickly and easily as the most accomplished hacker in history. For Diablo II, the developers had learned the lesson. Authoritative information is stored only on the server, giving them the beginnings of true security.

Just as the hardware owns the software, so too does the underlying operating system. As we have stated repeatedly here, <span class="e">''E''</span> enables the construction of extremely secure computing systems. However, <span class="e">E</span> systems can still be attacked from below, by viruses granted authority by the operating system underneath the <span class="e">''E''</span> world. Such attacks can only be prevented by completing the security regime, either with capability-based operating systems, or with computers on which only capability-secure programs are allowed to run. There is one open-source capability-based OS under development, at [http://www.eros-os.org www.eros-os.org].

====Denial Of Service Attacks====

One form of attack that even <span class="e">''E''</span> cannot defend against is denial of service (DOS). In a denial of service, someone simply swamps your system with requests, making your system unable to operate. Such attacks take many forms, but the key limitation in such attacks is this: such an attack can never steal your secret data or gain control of your sensitive operations. If DOS were the only kind of attack possible, the world would be a vastly better place. Indeed, if only DOS attacks were possible, even most modern DOS attacks would fail, because the serious DOS attacks require that the attacker first gain control of hundreds of computers that belong to other people, by using attacks far more invasive and damaging than DOS itself.

===Example: Auditing minChat===

The minChat application presented earlier as the example for a distributed system was designed with no consideration at all for security issues. This is literally true: minChat is derived from ''eChat'', which was written by the author as his first practice exercise to learn <span class="e">''E''</span>. However, because eChat was written in a capability secure environment, because the author used a fairly clean modular architecture, and because clean architecture in a capability-secure infrastructure makes its own luck, there were no serious security breaches in eChat. And, as we shall see, there are no serious security breaches in minChat, though there are many interesting lessons we can learn from it.

First of all, in minChat as in all <span class="e">''E''</span> programs, the communication is all encrypted and the objects are all unfindable and unguessable. Therefore no third party can enter the conversation to either eavesdrop or forge messages. The only source of security breach will be the person at the other end of the chat system, i.e., the person with whom you planned to chat. So, right off the bat, simply by using <span class="e">''E''</span> you have eliminated ''outsider'' attacks. But insider attacks are still possible, and indeed constitute the most serious threat to most security systems anyway. You have to be able to achieve ''cooperation in the presence of limited trust''. Cooperation despite limited trust is something we humans achieve every day, for example when paying 3 bucks for a gallon of milk at the local QwikMart. But it is notoriously difficult to get these transactions working correctly on computer systems when using conventional languages. So we will examine minChat very closely to ensure that the person we want to chat with does not also get any inappropriate powers, like the ability to delete all our files.

Now, let us begin with a quick review of minChat's code to see what parts of the system we need to examine to do a security review. As you may recall from the chapter on Ordinary Computing, emakers come into existence with no authority whatsoever, and receive authority only via the arguments that are passed into their objects. In larger programs, one can often ascertain that the emaker receives so little authority it can be disregarded from a security perspective: if it doesn't have enough authority, it just can't be a danger (as documented in the [http://www.combex.com/papers/darpa-report/index.html DarpaBrowser Final Report and Security Review]).

MinChat is just 2 pages of code. It does not use emakers, so this does not help in our current situation. However, what does help is that the only object accessible to the outside world is the chatController, whose reference is sent to the friend (or, in this case, perhaps the enemy) with whom we wish to chat (hey, we need to talk to our enemies too, from time to time). What unpleasantness can our friend do on our computer through the chatController interface he receives? We will reproduce the crucial lines of code here for convenience:

<pre>
<nowiki>pragma.syntax('0.9')

to send(message) {
    when (friend <- receive(message)) -> {
        chatUI.showMessage("self", message)
    } catch prob {chatUI.showMessage("system", "connection lost")}
}
to receive(message) {chatUI.showMessage("friend", message)}
to receiveFriend(friendRcvr) {
    bind friend := friendRcvr
    chatUI.showMessage("system", "friend has arrived")
}
to save(file) {file.setText(makeURIFromObject(chatController))}
to load(file) {
    bind friend := getObjectFromURI(file.getText())
    friend <- receiveFriend(chatController)
}</nowiki>
</pre>

There are only 5 messages here. Let's go through what our friend could do with each one of them in sequence.

* '''send(message):''' This is the method that sends a message to our friend. If our friend wants to send messages to himself, which we can ourselves read in our own chat window, this is amusing, but not very exciting as an attack.

* '''receive(message):''' This is one of the 2 messages our friend is supposed to use, to send us a message. The message gets placed in the chatArea text pane. Since we are receiving data here, even a cracker of modest powers would look at this and leap quickly to the conclusion that there is an opportunity here for a buffer overflow attack--the type of attack that, at the time of this writing, gets the most news media headlines because Windows is so richly riddled with such exploits. <br /><br /> Memory-safe languages, including <span class="e">E</span>, are not in general susceptible to buffer overflow attacks: if you overrun the end of a data structure of fixed size, you get an exception, not a data write into random memory. Alas, this is not the end of the story. The text areas in the widget toolkit are native widgets. Eventually, we send this text string of unbounded length to native code, written in an unsafe language in an unsafe way with unsafe quantities of authority. So buffer overflow attacks, and other attacks based on the strategy of sending a stream of data that will surprise a decoding algorithm into unexpected behavior, are possible in theory. This will remain a risk until all the widgets in our toolkits are also written in object-capability languages.<br /><br /> The only good news here is that the text widget uses a very simple data type, and has been attacked by accident by so many millions of programmers over the years that it has had to be fixed just so the system doesn't crash if you accidentally drag/drop a DLL file onto Notepad. So even under Windows, the text widget is robust against strange data. But this is an area of risk of which we must always be aware when using components written outside the capability paradigm.<br /><br /> Text widgets are pretty safe, gif and bmp widgets are probably safe, but by the time you encounter a widget using manically optimized C to decode mpeg-4, you're looking at a widget that is complicated enough that it is probably vulnerable.

* '''receiveFriend(friendRcvr):''' Our friend could specify someone who is not himself as the person we should send our messages to. Of course, our friend could write a slightly different version of minChat that does the forwarding to everyone he wants to see it, automatically. Or he could simply delegate the capability reference to someone else (i.e., hand a copy of it to another party). So this is not very exciting either.

* '''save(file):''' Now this is a very interesting method for attack. The opportunity for the friend to save data on our computer is an interesting one, and certainly outside the domain of what we intended for simple chatting. There's a little problem, however, from the attacker's point of view. The save method is receiving, as a parameter, a capability (i.e., a reference) to a local file. This capability is not forgeable by the friend unless he has access via other means to our file system, in which case we already granted him such power and are assuming he won't abuse it (if he has such power, he can abuse it more effectively by other means; abusing it through minChat is silly). The upshot is, the worst our malicious friend can do is send us an object that is not actually a file object, which will not respond properly to messages sent under the assumption that the receiver is a near file object, causing a "no such method" exception that will crash our session. So, our friend can crash minChat, but can't do any serious harm: an attack whose only result is to cut off the target from the attacker's attacks hardly constitutes a winning gambit.

* '''load(file):''' Not only is loading a file less interesting than writing, or corrupting, one, but worse, this method once again expects a local file object to work with. Again, the worst case is that the attacker can cause a session crash. An interesting variant of the attack might be to try to ship an object that looks and feels like a file for minChat to play with. This would fail directly because the load operation is filling a def'd variable that is already bound, resulting in a thrown exception. But let us assume for a moment that we made the variable "friend" a var, rather than a def that was later bound. The result in this case, if successful, would simply be to redirect the minChat client to a different person for further communication (since what is being loaded is the reference to the friend). Still no joy for the attacker.

The upshot is, even the friend who has received the capability to talk to our chat tool doesn't get much traction. He can annoy us (by crashing minChat), but that is pretty much the only interesting thing he can do... or so it seems on first review.

Let's see if there is anything exotic an attacker could do by combining several of these unimportant attacks. First let us make the problem more concrete. Suppose Bob has a crush on Alice, who is dating Ted. Bob decides to make Alice think that Ted doesn't really like her. So he gives Alice his chat reference to Ted, and tells her that, when using this reference, Ted will think she is really Bob, and so Alice will be able to get Ted to "speak candidly" (to Bob) about her. At this point, Bob uses the "send" method on Ted's computer to send messages that Alice will read, messages that look to Alice like Ted is sending them! Bob sends cruel jokes about Alice, and Alice breaks up with Ted.

Hmmm... it looks pretty bad for Ted. Did we just find a really cool attack?

Well, sort of. First there is the risk that Ted will notice that his chat tool is, without any help from its owner, sending offensive messages to Bob. Ted might then wonder what is going on. But more fundamentally, Bob has a much simpler scheme for messing with Alice's headspace, if Alice is open to this type of attack.

Far simpler for Bob would be to create 2 brand new chat capabilities, one being Bob's FakeTed and the other Bob's FakeBob. He has Alice start up as FakeBob, and Bob himself starts up FakeTed. Now Ted is completely out of the loop, and Bob can send Alice any horrid thing he wants, without interference.

So is this a fundamental flaw with the entire <span class="e">''E''</span> security approach? Not exactly. This attack can be made regardless of the technology being used -- Bob could have created 2 Yahoo Instant Messenger accounts and played the same game. Indeed, variations on this attack were used repeatedly by William Shakespeare throughout his career to create both high tragedy and low comedy. The fundamental problem comes when you trust a middleman as the sole validation of the authenticity of messages coming from another person. This is a human problem, not a technology problem. The best source of contact information for a person is the person him/herself; trust a third party at your own risk. Often the risk makes sense, but don't ever forget there is a risk there, whether you are using capability security or just a parchment and quill.

We have come full circle to the conclusion that minChat doesn't have any serious security breaches. Nonetheless, there are a couple of annoying things Bob may do to Ted. These annoyances can be traced directly to the presence of five messages in the chatController: there are only two messages out of the five that the friend is "supposed" to use. Even forgetting about security reviews for a moment, it is clear that, if the friend is only supposed to have two methods, he should not have five methods at his disposal. This makes sense from a simple modular design point of view: it limits the number of mistakes you can make. For the security reviewers, this is an enormous win, because it reduces the number of methods to examine by 60%. This is a substantial gain, given how circuitous and complicated security analysis can become.

So how do we cut the number of methods available to the friend? By using a ''facet'' of the chatController that only has the appropriate methods:

<pre>
<nowiki># E syntax
# Place the following object above the binding of the chatController
def chatFacet {
    to receive(message) {chatController.receive(message)}
    to receiveFriend(friendRcvr) {chatController.receiveFriend(friendRcvr)}
}

# Now modify the chatController's save method to save the chatFacet, not the chatController:
to save(file) {file.setText(makeURIFromObject(chatFacet))}

# and similarly modify the load method to send the chatFacet, not the chatController:
to load(file) {
    bind friend := getObjectFromURI(file.getText())
    friend <- receiveFriend(chatFacet)
}</nowiki>
</pre>

This version of minChat is more secure (only a little more secure, though, since it was pretty secure already), and vastly easier to review.

There are still a number of interesting lessons we can learn here by considering some variations on minChat. The first variation is based on the old adage, "You can write FORTRAN in any language." This is true in <span class="e">''E''</span> as well, but the implications can be grievous.

Let's go back to the minChat without the chatFacet, and consider the following small change to the program. Suppose that the ''save'' method in the chatController, instead of receiving a file object from the chatUI and creating the URI string itself, received the file path and the URI string from the chatUI.

This is a fairly FORTRAN-ish way of accomplishing the goal: since FORTRAN doesn't have objects, it may seem perfectly obvious and reasonable to just send the path to the file and let the chatController figure out how to write to that file on its own.

However, in this situation, it is a disaster. Without the chatFacet, now the friend at the far end has a method he can call that allows him to write any data he wants, into any location the user can reach on his own computer:

<pre>
<nowiki># on the friend's machine
# put the code for the SubSevenServer Trojan in a string
def code :String := "@#$!...and so on, the character representations of the code bytes"

# The friend's friend is our poor user
# Put the trojan horse in the user's startup profile
friend <- save("~/startup/sub7.exe", code)
</nowiki>
</pre>

Now we're having some fun! Every time the user logs onto his computer, the friend gets full control of all his resources.

What went wrong here? There are several ways of slicing the pie and assigning blame to different pieces of the code. One we have already identified: we failed to follow POLA in allowing the friend to call the ''save'' method in the first place. But another is just as crucial: we failed to follow POLA twice more with this revised version of the ''save'' method that receives two strings.

We failed first by sending the save method an insufficient amount of authority (remember, Least Authority also means Adequate Authority). We did this by sending the chatController a string ''describing'' the file's path rather than just sending him the file itself. As a consequence, the chatController had to use special powers gotten from elsewhere to translate the description into a file object.

Once the chatController had to do this translation, it became vulnerable to the [http://www.cis.upenn.edu/~KeyKOS/ConfusedDeputy.html Confused Deputy Attack], a classic in security literature. The Confused Deputy attack is the aikido move of computer security: the attacker persuades the target to use the target's own authority against itself. Confused Deputy attacks are very hard to pull off as long as a single reference serves as both the designation of the object and the authority to use the object (as is always the case with the objects in a capability-based programming language). But when the authority to use the object is separated from the designation, as it is when a path string is being used to describe the file, you are in deep trouble.

Simply sending the file object rather than the file description solves this problem. The friend at the far end has no way to send an object that will be interpreted by the ''save'' method as a local file, and so the friend gets no traction whatsoever as long as the ''save'' method is expecting the object not the description. This leads to the general rule:

When you must read in the descriptions of objects rather than the objects themselves, translate them into capability secure objects at the ''earliest possible moment.'' When you must write out descriptions of objects rather than the objects themselves, postpone the translation of the object into a description until the ''last possible moment''.

This rule is not merely good for secure, capability-oriented programming; it is good for all object-oriented programming. Manipulating a description of an object is much less reliable, and much more error prone, than manipulating the object itself--which is why crackers like it and why object-oriented designers avoid it. This exemplifies an interesting truth: capability-oriented development is often just object-oriented development taken seriously.

Working directly with the object, not a description, is a rule the author learned the hard way. A security review team ripped numerous holes in a single piece of the author's software, simply because the author foolishly worked with descriptions rather than with objects. The story is told in excruciating detail in the [http://www.combex.com/papers/darpa-review/ DarpaBrowser Security Review]. If you are one of the rare and lucky individuals who can learn from other people's mistakes, rather than having to make them firsthand before you really get it, this is a magnificent lesson to learn secondhand.

====Defense In Depth versus Eggshell Security====

In large software systems, most objects are constructed inside emakers, rather than in the main <span class="e">''E''</span> program body. Consequently they come to life with severely restricted, POLA-oriented authority (i.e., only the authority we send to them when we construct them, which tends to be what they need and nothing else). Let us emulate this characteristic of large systems in our little minChat by making a paranoid version of the chatFacet:

<span style="color:red">This is grossly broken syntax for eval</span>

<pre>
<nowiki># E syntax
def makeFacet := e`
    def makeFacet(chatController) {
        def chatFacet {
            to receive(message) {chatController.receive(message)}
            to receiveFriend(friendRcvr) {chatController.receiveFriend(friendRcvr)}
        }
        return chatFacet
    }
`.eval(universalScope)
def chatFacet := makeFacet(chatController)
</nowiki>
</pre>

The universalScope eval method creates the makeFacet function in strict confinement, identical to the confinement imposed on emakers. Consequently, the only authority the chatFacet has is the authority handed in, namely, the authority to send messages to the chatController.

Now let's assume we are running a more complex program, with a more complex component, i.e., a component that is not so simple that we can just look at it and see it doesn't do anything very exciting. Let's further assume that by some extraordinary bit of legerdemain the attacker succeeds in completely subverting the component (the chatFacet in our little example). How terrible is the disaster?

As we have already shown, there is no disaster. Even if the attacker gets direct access to the chatController, there still isn't anything interesting he can do. We have achieved ''defense in depth''. A typical penetration of our system gives the attacker only a limited access to other objects, which in turn must be individually penetrated.

Compare this to the situation now typical in software engineering. Even with the most modern "secure" languages like Java and C#, an attacker can acquire the full authority of the program by subverting even the most inconsequential object. With tools like access control lists and firewalls, we engage in "perimeter defense", which is more correctly described as "eggshell defense". It is like an eggshell for the following reason: while an eggshell may seem pretty tough when you tap on it, if you can get a single pinhole anywhere in the surface, you can suck out the entire yolk. No wonder cybercrackers laugh at our silly efforts to defend ourselves. We have thrown away most of our chances to defend ourselves before the battle even begins.

===Capability Patterns===

====Facets====

As discussed in the minChat security review, facets are objects that act as intermediaries between powerful objects and users who do not need (and should not be granted) their full power. We saw a facet in use in the audited version of the eChat program, where the chatFacet was interposed between the chatController and the remote chat participant. Here is a little general-purpose facet maker:

<pre>
<nowiki># E sample
/**
 * <param> target is the underlying powerful object
 * <param> allowedMethods is a map. The key is a method name, the value
 * is the set of allowed numbers of arguments. So ["receive" => [0, 2].asSet()]
 * would allow the receive method, with either 0 or 2 arguments, to be forwarded.
 **/
def makeFacet(target, allowedMethods) {
    def filteringFacet {
        match [verb, args] {
            if (allowedMethods.maps(verb) &&
                    allowedMethods[verb].contains(args.size())) {
                return E.call(target, verb, args)
            }
        }
    }
    return filteringFacet
}
def chatController
def chatFacet := makeFacet(chatController, ["receive" => [1].asSet(),
                                            "receiveFriend" => [1].asSet()])
</nowiki>
</pre>

Facets of this type can be retrofitted onto an existing system. We did this with very little effort for minChat with the chatFacet, but the technique works for far more complicated problems as well. The capability-secure windowing toolkits that <span class="e">''E''</span> has placed on top of Swing and SWT use facetization as their main tool.

Facets can be made much more sophisticated in their restrictions on access to their underlying object. You can make a facet that logs requests and sends email when certain methods are called. In an SEC-regulated stock exchange facet, you might wish to grant the capability to make a trade only from 9AM to 3PM excluding weekends and holidays.
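As an illustration, a time-restricted trading facet might be sketched as follows (a hypothetical example; `isTradingHours` is an assumed helper, supplied elsewhere, that encodes the exchange's approved schedule):

<pre>
<nowiki># E syntax
# Hypothetical sketch of a time-restricted facet
def makeTradingFacet(exchange, isTradingHours) {
    def tradingFacet {
        to trade(order) {
            if (isTradingHours()) {
                return exchange.trade(order)
            } else {throw("trading not allowed outside trading hours")}
        }
    }
    return tradingFacet
}
</nowiki>
</pre>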

One interesting example is the use-once facet, which allows the holder of the facet to use the facet only one time. For example, this version of a chatReceiver only allows a single message to be sent:

<pre>
<nowiki># E sample
def makeOnceOnlyReceiver(baseChatController) {
    var chatController := baseChatController
    def onceOnlyReceiver {
        to receive(text) {
            chatController.receive(text)
            chatController := null
        }
    }
    return onceOnlyReceiver
}</nowiki>
</pre>

This version will throw an exception back to the sender of a second message.

It can be tempting in a facet to suppress a couple of powerful methods in the underlying powerful object and delegate the rest.

<pre>
<nowiki># E sample
def powerfulObject {
    to doPowerfulOperation() {
        # do powerful operation
    }
    to doWeak1() {}
    to doWeak2() {}
    to doWeak3() {}
    # ....
    to doWeak99() {}
}
def badFacet extends powerfulObject {
    to doPowerfulOperation() {
        # do nothing, no forwarding for the powerful operation, but forward everything else
    }
}</nowiki>
</pre>

''Avoid this''. For a facet to remain secure during maintenance, it must never delegate by default. If a new method is later added to the powerful object (in this example, suppose powerfulObject is updated with a new method, doPowerful2()), the facet must not expose it automatically; every forwarded method should be allowed explicitly.

This risk can be exhausting to avoid, but it is always dangerous to accept. The first version of the capability windowing toolkit on Swing suppressed the handful of dangerous methods in the Java version 1.3 of Swing rather than explicitly allowing the safe methods, of which there were thousands. Within 30 days, the entire system was broken: the Java 1.4 Beta included hundreds of new, security-breaking methods. The only saving grace was that we had always known that the first version, thrown together in haste on weekends, was just a proof-of-principle and would have to be replaced. We just hadn't appreciated how soon replacement would be required.

====Revocable Capabilities====

If you wish to give someone restricted access to an object by handing out facets, it is quite likely you will want to revoke that access at some point as well.

The simplest way of making a capability revocable is to use a transparent forwarder that includes a revocation method:

<pre>
<nowiki># E sample
def revocableCapabilityMaker(baseCapableObject) {
    var capableObject := baseCapableObject
    def forwarder {
        to revoke() {capableObject := null}
        match [verb, args] {E.call(capableObject, verb, args)}
    }
    return forwarder
}</nowiki>
</pre>

Note that, even though the forwarder nominally delegates all method invocations (except revoke()) to the capableObject, we cannot use the ''extends'' keyword to capture this behavior. ''Extends'' captures an immutable reference to the delegate, so even though capableObject is a var, reassigning it would not revoke the delegating behavior. Instead we use the match [verb, args] pattern.

Capability revocation, like facet forwarding, can be based on complex sets of conditions: revoke after a certain number of uses, revoke after a certain number of days. This example uses a simple manual revocation. Indeed, this version is too simple to work reliably during system maintenance and upgrade, and a more sophisticated pattern is generally recommended:
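A sketch of such a separated pattern, under the same conventions as the forwarder above (a hypothetical reconstruction, not the exact code from the original text):

<pre>
<nowiki># E syntax
# Hypothetical sketch: the revoker is a separate object from the forwarder
def makeRevocablePair(baseCapableObject) {
    var capableObject := baseCapableObject
    def forwarder {
        match [verb, args] {E.call(capableObject, verb, args)}
    }
    def revoker {
        to revoke() {capableObject := null}
    }
    return [forwarder, revoker]
}
</nowiki>
</pre>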

In this pattern, the authority of the object and the authority to revoke are separated: you can hand the power to revoke to an object you would not trust with the authority itself. Also, the separate revoker can itself be made revocable.

====Sealers and Unsealers====

A sealer/unsealer pair makes it possible to use untrusted intermediaries to pass objects safely. Use the sealer to make a sealed box that can only be opened by the matching unsealer. <span class="e">E</span> has built-in support for sealer/unsealer pairs:

<pre>
<nowiki>? def makeBrandPair := <elib:sealing.makeBrand>
# value: <makeBrand>

? def [sealer, unsealer] := makeBrandPair("BrandNickName")
# value: [<BrandNickName sealer>, <BrandNickName unsealer>]

? def sealedBox := sealer.seal("secret data")
# value: <sealed by BrandNickName>

? unsealer.unseal(sealedBox)
# value: "secret data"
</nowiki>
</pre>

If you hold the unsealer private, and give the sealer away publicly, everyone can send messages that only you can read. If you hold the sealer private, but give the unsealer away publicly, then you can send messages that recipients know you created, i.e., you can use it as a signature. If you are thinking that this is much like a public/private key pair from public key cryptography, you are correct, though in <span class="e">E</span> no actual encryption is required if the sender, recipient, brand maker, and sent object all reside in the same vat.

While the Brand is built-in, it is possible to construct a sealer/unsealer pair maker in E without special privileges. The "shared variable" technique for making the maker is interesting, and because the same pattern appears in other places (such as the Notary/Inspector, coming up next), we demonstrate it here:

<pre>
<nowiki># E sample
def makeBrandPair(nickname) {
    def noObject {}
    var shared := noObject
    def makeSealedBox(obj) {
        def box {
            to shareContent() {shared := obj}
        }
        return box
    }
    def sealer {
        to seal(obj) {return makeSealedBox(obj)}
    }
    def unsealer {
        to unseal(box) {
            shared := noObject
            box.shareContent()
            if (shared == noObject) {throw("invalid box")}
            def contents := shared
            shared := noObject
            return contents
        }
    }
    return [sealer, unsealer]
}
</nowiki>
</pre>

The variable "shared" normally contains the value "noObject", which is private to a particular sealer/unsealer pair and so could never be the actual value that someone wanted to pass in a sealed box. The unsealer tells the box to shareContent, which puts the content into the shared variable, from which the unsealer then extracts the value for the invoker of the unseal method. In a conventional language that used threads for concurrency control, this pattern would be a disaster: different unseal requests could rumble through, overwriting each other's shared content. But because a sequence of immediate calls in <span class="e">''E''</span> runs atomically within a single turn of the event loop, this peculiar pattern becomes a clean solution for this and several other security-related operations.

====Vouching with Notary/Inspector====

Suppose Bob is the salesman for Widget Inc. He persuades Alice to buy a widget. Bob hands Alice a Widget Order Form with a money-receiving capability. It is important to Bob that Alice use the form he gives her, because this particular form (which Bob got from Widget Inc.) remembers that Bob is the salesman who should get the commission. It is important to Alice that she know for sure that, even though she got the order-form from Bob, this is really a Widget Inc. order form, and not something Bob whipped up that will transfer her money directly to his own account. In this case, Alice wants to have Widget Inc. vouch for the order-form she received from Bob. She does this using an Inspector that she gets directly from Widget Inc. The Inspector is the public part of a notary/inspector pair of objects that provide verification of the originator of an object. To be vouchable, the orderForm must implement the startVouch method as shown here:

<pre>
<nowiki># E sample
####### Widget Inc. software #####

# vouching system
# returns a private notary that offers a public inspector
# throws problem if the object being vouched is not vouchable
def makeNotary() {
    def nonObject {}
    def unvouchedException(obj) {throw(`Object not vouchable: $obj`)}
    var vouchableObject := nonObject
    def inspector {
        to vouch(obj) {
            vouchableObject := nonObject
            try {
                obj.startVouch()
                if (vouchableObject == nonObject) {
                    return unvouchedException(obj)
                } else {
                    def vouchedObject := vouchableObject
                    vouchableObject := nonObject
                    return vouchedObject
                }
            } catch err {unvouchedException(obj)}
        }
    }
    def notary {
        to startVouch(obj) {vouchableObject := obj}
        to getInspector() {return inspector}
    }
    return notary
}

# create Widget Inc's notary
def widgetNotary := makeNotary()

# Order form maker
def makeOrderForm(salesPerson) {
    def orderForm {
        # .... methods for implementing orderForm
        to startVouch() {widgetNotary.startVouch(orderForm)}
    }
    return orderForm
}

# publicly available inspector object
# (accessible through a uri posted on Widget Inc's web site)
def WidgetInspectionService {
    to getInspector() {return widgetNotary.getInspector()}
}

##### bob software #####

# scaffold for sample
def getOrderFormFromBob() {return makeOrderForm("scaffold")}

########## Alice's software to vouch for the order form she received from Bob #####

def untrustedOrderForm := getOrderFormFromBob()
def inspector := WidgetInspectionService.getInspector()
def trustedOrderForm := inspector.vouch(untrustedOrderForm)</nowiki>
</pre>

====Proof of Purchase====

"Proof of Purchase" is the simplest of a series of capability patterns in which the goal is not to transfer an authority, but rather to show someone that you have the authority so that they can comfortably proceed to use the authority on your behalf. In Proof of Purchase, the requester of a service demonstrates to the server that the client already has the capability that the server is being asked to use. Unlike the more sophisticated Claim Check patterns coming up, the Proof of Purchase client is not concerned about giving the server excess authority.

An interesting example of Proof of Purchase is [http://www.erights.org/javadoc/java/awt/Component.html Component.transferFocus(fromComponentsList, toComponent)] found in the tamed Swing package. In standard Swing, the holder of any panel reference can steal the focus (with component.requestFocus()). Hence any keystrokes, including passwords, can be swiped from wherever the focus happens to be by sneaking in a focus change. This is a clear violation of POLA. You should be able to transfer focus from one panel to another panel ''only if you have references to both panels''. If you have a reference to the panel that currently has the focus, then you can never steal the focus from anyone but yourself.

A natural-seeming replacement for component.requestFocus() would be requestFocus(currentFocusHolder). It was decided during the taming of Swing, however, that this would be breach-prone. If Alice transferred to Bob, not a panel itself, but rather a simple transparent forwarder on that panel, Alice could collect authorities to other panels whenever Bob changed focus.

Since we had a convenient trusted third party available that already had authority over all the panels anyway (the javax.swing.Component class object), we used the "proof of purchase" pattern instead. The globally available Component.transferFocus method accepts two arguments:

* a list of panels that the client believes to contain, somewhere in their subpanel hierarchies, the focus
* the target panel that should receive the focus

In the common case where a whole window lies inside a single trust realm, the client can simply present a reference to the whole window and be confident that the focus will be transferred.

The interesting thing about Component.transferFocus is that the recipient does not actually need the client's authority to reach the component that already has the focus. The Component class already has that authority. The client must send the authority merely to prove he has it, too.
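The heart of the pattern can be sketched like this (a hypothetical, much-simplified focus manager; the real tamed Component.transferFocus also searches the subpanel hierarchies of the presented list):

<pre>
<nowiki># E syntax
# Hypothetical sketch: the manager already holds the real focus authority;
# the client's panel list serves only as proof of authority
def makeFocusManager() {
    var focusHolder := null
    def focusManager {
        to transferFocus(provedPanels, toPanel) {
            var proved := false
            for panel in provedPanels {
                if (panel == focusHolder) {proved := true}
            }
            if (proved) {
                focusHolder := toPanel
            } else {throw("client cannot prove authority over the focus holder")}
        }
    }
    return focusManager
}
</nowiki>
</pre>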

====Basic Claim Check====

A common activity that brings security concerns into sharp focus is the delicate dance we perform when we use a car valet service. In this situation, we are handing the authority over our most precious and costly possession to a teenager with no more sense of responsibility than a stray cat. The whole valet system is a toothsome exercise in POLA.

In this example, we focus on the sequence of events and trust relationships involved in ''reclaiming'' our car at the end of the evening. The participants include the car owner, the valet service itself, and the random new attendant who is now on duty.

We have already chosen, for better or for worse, to trust the valet service with a key to the car. As the random new attendant comes running up to us, we are reluctant to hand over yet another key to the vehicle. After all, this new attendant might actually be an imposter, eager to grab that new Ferrari and cruise out over the desert. And besides, we already handed the valet service a key to the car. They should be able to use the key they already have, right?

Meanwhile, the attendant also has a trust problem. It would be a career catastrophe to serve up the Ferrari from his parking lot if we actually own the dented 23-year-old Chevy Nova sitting next to it.

In the physical world, we use a ''claim check'' to solve the problem. We present to the attendant, not a second set of car keys, but rather, a proof that we have the authority that we are asking the attendant to use on our behalf.

<pre>
<nowiki># E sample
def claimMgr := {
    def carKeys := [].asMap().diverge()
    def claimMgr {
        to makeClaim(car) :any {
            # the claim object has no interesting
            # properties except it has
            # a unique unforgeable identity
            def claim {}
            carKeys[claim] := car
            return claim
        }
        to reclaim(claim) :any {
            def car := carKeys[claim]
            carKeys.removeKey(claim)
            return car
        }
    }
}

def attendant {
    # return a claim check when car is parked
    to parkCar(car) :any {
        # ...code to park the car...
        return claimMgr.makeClaim(car)
    }
    to retrieveCar(claim) :void {
        def car := claimMgr.reclaim(claim)
        # ...code to retrieve car...
        # no need to return the car reference, the owner of the claim
        # presumably already has such a reference
    }
}

def car {}
def carOwner := {
    var claim := null
    def carOwner {
        to letAttendantParkCar() :void {
            claim := attendant.parkCar(car)
            println(claim)
        }
        to getCarFromAttendant() :void {
            println(attendant.retrieveCar(claim))
        }
    }
}

carOwner.letAttendantParkCar()
carOwner.getCarFromAttendant()
</nowiki>
</pre>

====Basic Claim Check Default Behavior====

Suppose the owner of the car loses the claim check. He can still prove he owns the car by presenting another key. In software, this corresponds to the owner handing the attendant a direct reference to the car rather than merely the claim check. The car owner has weakened his own security, but it is hard to imagine a situation in which passing the direct reference is not a reasonable way for him to express his intent. We can cover this case with a small modification to the claimMgr's reclaim method:

<pre>
to reclaim(claim) :any {
    <font color="#0000FF">''try {''</font>
        def car := carKeys[claim]
        carKeys.removeKey(claim)
        return car
    <font color="#0000FF">''} catch prob {
        if (carKeys.contains(claim)) {
            return claim
        } else {throw(prob)}
    }''</font>
}
</pre>

What should the claimMgr return if the carOwner hands in a car reference for reclaim, if the car is not actually in the claimMgr's parking lot? The answer depends on the application, but such a situation violates the expectations of all the participants sufficiently that throwing an exception seems the safest choice.

====NonTransferable Claim Check====

We are far from done with claim checks. A careful car owner will not actually hand his claim check to the parking lot attendant. Rather, he will merely show the claim check to the attendant. After all, if you hand the claim check over to an imposter, the imposter can turn around and hand the claim check to a real attendant, pretending that he is the owner of the car.

The problem is somewhat different in cyberspace. The good news is, in cyberspace the attendant doesn't return a reference to the car just because you hand him the claim check. Instead, he merely performs the action you ask of him with the car. This set of actions is limited by the attendant's willing behavior. So the claim check is not as powerful a capability as the car itself. But it can still be a powerful capability. And the bad news is, in cyberspace, you can't just "show" it to the attendant. You have to give it to him.

Here we demonstrate a "nontransferable" claim check. This claim check can be handed out at random to a thousand car thieves, and it does them no good. The trick is that, before handing over the claim check, the owner inserts into the claim check a reference to the individual he is treating as an attendant. The claimMgr compares the person to whom the owner handed the claim with the attendant who eventually hands the claim back to the claimMgr. If these two are the same, then the owner handed the claim directly to the attendant. Otherwise, there was an intermediary party, and the request should not be honored.

<pre>
<nowiki># E sample
def makeBrandPair := <elib:sealing.makeBrand>
def valetService := {
    def [claimSealer, claimUnsealer] := makeBrandPair("claim")
    def claimMgr {
        to makeClaim(car) :any {
            def transferableClaim {
                to makeNontransferableClaim(intendedRecipient) :any {
                    return claimSealer.seal([car, intendedRecipient])
                }
            }
            return transferableClaim
        }
        to reclaim(claim, actualRecipient) :any {
            def [car, intendedRecipient] := claimUnsealer.unseal(claim)
            if (actualRecipient == intendedRecipient) {
                return car
            } else {throw("claim not transferable, invalid attendant")}
        }
    }
    def valetService {
        to authorizeAttendant(attendant) :void {
            attendant.setClaimMgr(claimMgr)
        }
    }
}

def makeAttendant() :any {
    def claimMgr
    def attendant {
        to setClaimMgr(mgr) :void {bind claimMgr := mgr}
        to parkCar(car) :any {
            # ...code to park the car...
            return claimMgr.makeClaim(car)
        }
        to retrieveCar(claim) :void {
            def car := claimMgr.reclaim(claim, attendant)
            # ...code to retrieve car...
            # no need to return the car reference, the owner of the claim
            # presumably already has such a reference
        }
    }
    return attendant
}

def legitAttendant := makeAttendant()
valetService.authorizeAttendant(legitAttendant)

def carThief {
    to parkCar(car) :any {
        return println("Ha! stole the car")
    }
    to retrieveCar(claim) :void {
        try {
            legitAttendant.retrieveCar(claim)
            println("Ha! didn't get car, but got control")
        } catch prob {println(`rats! foiled again: $prob`)}
    }
}

def carOwner := {
    def car {}
    var claim := null
    def carOwner {
        to letValetPark(attendant) :void {
            claim := attendant.parkCar(car)
            println(claim)
        }
        to letValetRetrieve(attendant) :void {
            def noTransferClaim := claim.makeNontransferableClaim(attendant)
            println(attendant.retrieveCar(noTransferClaim))
        }
    }
}

carOwner.letValetPark(legitAttendant)
carOwner.letValetRetrieve(carThief)
carOwner.letValetRetrieve(legitAttendant)
</nowiki>
</pre>

Note an important implication in this pattern: the ClaimMgr winds up in a position to accumulate references to all the entities a claim check holder treats as an attendant. In this example, the ClaimMgr gets a reference to the carThief as a result of the carOwner's casual handing out of claim checks. It seems unlikely that the ClaimMgr can harm the carOwner's interests with this reference, but in other circumstances the reference may not be so harmless. To reduce our risk exposure to the alleged attendants to whom we hand the claim check, we have increased our risk exposure to the ClaimMgr.

Note also the limitations on the nontransferability of this pattern. The carOwner can still transfer authority, either by handing someone a reference to the car, or by handing out the transferableClaim from which nontransferableClaims can be made. The nontransferability is voluntarily chosen by the carOwner. You can't prevent people from delegating their authority, and even this pattern doesn't change that fact.

Voluntary Oblivious Compliance, or VOC, was pioneered by the [http://www.erights.org/history/client-utility.html Client Utility System]. VOC is a field only recently recognized by the capability-security community as an important area of exploration; the claim check patterns here are early fruit of that research. <br /><br /> VOC is irrelevant in the classical cypherpunk view of the world, where every person is a rugged individualist making all his own authority-granting decisions with little regard for other people's issues. In a world filled with corporations, governments, and other policy-intensive organizations, however, it is an area of real value even though it is not really a security matter. Why is it not security? We consider it beyond the scope of "security" for a simple but compelling reason: it is not enforceable. Unenforceable security in cyberspace has been proven, over and over again in the course of the last decade, to be a joke played on all the participants. VOC states in its very name the limits of its applicability: it only works with volunteers. Don't be fooled into thinking this is security.

|}

Let us consider a somewhat different problem. Suppose that different attendants are trusted to handle different cars by the valet service. One valet has a motorcycle license and parks all the Harleys. Another has a multi-engine pilot's license and parks all the Boeing 747s. Of course, the one with the motorcycle license is a teenager who has always wanted to try his hand at parking a 747, and knows his lack of experience is not a problem. In this situation, each attendant has a different set of authorities at his command; just because you hand your claim check to a legit attendant doesn't mean the valet service thinks it would be a good idea to let that attendant drive your vehicle. A more generalized way of stating the problem is, in this case the authorities of the individual receivers of the claim checks vary, and management of the individual receiver's authorities is beyond the scope of what the ClaimManager should be trying to figure out. After all, the individuals know all their own authorities; it would be poor design (and unmaintainable at scale) for the ClaimManager to try to duplicate this information.

This situation, while not a part of the real-world valet service problem, has a significant area of application in cyberspace. This area is ''Voluntary Oblivious Compliance'', or VOC. Let us consider a more sensible example. Alice works for HPM, and Bob works for Intil. HPM and Intil are often competitors, but Alice and Bob are working on a joint project from which both companies expect to profit handsomely. The Official Policy Makers of HPM have identified a number of documents which can be shared with Intil, and indeed have forwarded to Intil references to the allowed docs. Alice wants to refer Bob to a particular HPM document, but ''only if sharing the document is allowed'' under the HPM policy. In this case, the VOCclaimCheck that Alice sends to Bob demands that Bob demonstrate to HPM's ClaimMgr that he already has authority on the document before the claim is fulfilled. To prove his authority, Bob sends to the ClaimMgr all the HPM doc authorities he has (i.e., the list of docs HPM handed to Intil). Only if both Alice and Bob already have authority on the document does Bob get it. This is sometimes called the Loan Officer Protocol, in reference to the old adage that a loan officer will not give you a loan unless you first prove that you don't need it.


Since Bob can't get the reference unless he proves he doesn't need it, we can now see why this is a pattern of Voluntary Oblivious Compliance. It is voluntary because Alice could just send Bob the document and circumvent the system. And it is oblivious because Alice doesn't need to know whether a document can be shared with Bob before sending it.


Going back to the parking lot attendant who wants to park the 747: he has to demonstrate to the ClaimManager that he has the keys to the 747 before the ClaimManager will let him go for it. The owner of the 747 is much relieved.


Humorous as the 747 example might be, we will now switch to the scenario with Alice, Bob, HPM, and Intil for our sample code. The basic strategy is that the ClaimMgr not only examines the claim check from Alice, it also examines the list of authorities that Bob hands over to see if any of Bob's authorities match the one inside Alice's claim.

<pre>
<nowiki># E sample
def makeBrandPair := <elib:sealing.makeBrand>
def [claimSealer, claimUnsealer] := makeBrandPair("claim")
def HPMDocClaimMgr {
    to makeClaim(doc) :any {
        return claimSealer.seal(doc)
    }
    to matchClaim(claim, listOfCandidates) :any {
        def doc := claimUnsealer.unseal(claim)
        for each in listOfCandidates {
            if (doc == each) {return each}
        }
        throw("No match from claimCheck to candidate auths")
    }
}

def intilSharedDocs := [def doc1{}, def doc2{}, def doc3{}]

def bob {
    to lookAtHPMDoc(claim) :void {
        try {
            def doc := HPMDocClaimMgr.matchClaim(claim, intilSharedDocs)
            println(`reading doc: $doc`)
        } catch prob {
            println(`can't read: $claim`)
        }
    }
}

def alice {
    to sendDoc(doc, recipient) :void {
        def claim := HPMDocClaimMgr.makeClaim(doc)
        recipient.lookAtHPMDoc(claim)
    }
}

alice.sendDoc(doc2, bob)
alice.sendDoc(def doc4{}, bob)
</nowiki>
</pre>
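The sealer/unsealer pair made by `<elib:sealing.makeBrand>` does the real work above. For readers unfamiliar with E, the following rough Python sketch (all names invented for illustration; this is not any real library's API) shows the essential property: only the unsealer matched to a given sealer can recover the sealed contents, and boxes from a different brand are rejected.

```python
# Rough Python analogue of E's sealer/unsealer ("brand") pattern.
# All names here are illustrative, not part of any real library.

def make_brand_pair(nickname):
    boxes = {}       # private shared state; only this closure can see it
    counter = [0]

    class SealedBox:
        # a fresh class per brand, so isinstance() distinguishes brands
        def __init__(self, ticket):
            self._ticket = ticket   # opaque; contents stay hidden
        def __repr__(self):
            return f"<sealed {nickname} box>"

    def seal(obj):
        counter[0] += 1
        boxes[counter[0]] = obj
        return SealedBox(counter[0])

    def unseal(box):
        if not isinstance(box, SealedBox):
            raise ValueError("not sealed by this brand")
        return boxes[box._ticket]

    return seal, unseal

seal, unseal = make_brand_pair("claim")
claim = seal("doc2")              # Alice makes a claim check
assert unseal(claim) == "doc2"    # only the matching unsealer recovers it
```

A `matchClaim` in this style would simply unseal the claim and compare the result against the candidate list, exactly as the E sample does.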


The oblivious claim check pattern can require scattering even more authority to the winds than the nontransferable claim check. The recipient of the claim check (bob) needs to have all the authorities that alice might give to him, rather than merely the ones she actually does give to him. And the trusted third party who matches the claim check against the list of possibilities (the HPMDocClaimMgr) gets to accumulate authority to everything that bob thinks alice might be trying to send him. In the example scenario, this is just fine. But some care is required.

====Oblivious Claim Checks as Guards====

An elegant style of using VOC claim checks is to set up the pattern as a pair of guards. Alice would use the ClaimCheck guard to send a reference to Bob, and Bob would use the CheckedClaim guard, with the list of candidate references, to receive the authority. We will show the implementation of the BuildGuards ClaimCheck and CheckedClaim guards in [[Walnut/Advanced_Topics|Advanced Topics]] when we talk about writing your own guards, but their usage would appear as follows:

The ClaimCheck guard coerces the doc into a sealed representation of itself. The CheckedClaim guard unseals the claim and matches it against the authorities in the intilSharedDocs list. As a guard, it is better style to return a broken reference than to throw an exception if something goes wrong. If a parameter guard threw an exception, the recipient would have no chance to do anything about the problem: the exception would propagate directly back to the sender before the recipient was even aware there was an issue.

====Oblivious NonTransferable Claim Check====

It is possible to make nontransferable oblivious claim checks as well. We leave the code as an exercise for the reader.

====Powerbox Capability Manager====

The powerbox pattern collects diverse elements of authority management into a single object. That object, the ''powerbox'', then becomes the arbiter of authority transfers across a complex trust boundary. One of the powerbox's most distinctive features is that it may be used for dynamic negotiation of authority during operation. The less trusted subsystem may, during execution, request new authorities. The powerbox owner may, in response to the request and depending on other context that it alone may have, decide to confer that authority, deny it, or even grant the requested authority after revoking other authorities previously granted.


The powerbox is particularly useful in situations where the object in the less trusted realm does not always get the same authorities, and when those authorities may change during operation. If the authority grant is static, a simple emaker-style authorization step suffices, and a powerbox is not necessary. If the situation is more complex, however, collecting all the authority management into a single place can make it much easier to review and maintain the extremely security-sensitive authority management code.


Key aspects of the powerbox pattern include:


* A powerbox uses strict guards on all arguments received from the less trusted realm. In the absence of guards, even an integer argument received from untrusted code can play tricks: the first time the integer-like object (that is not really an integer) is asked to "add", it returns the value "123"; but the second time, it returns the value "456", with unpredictable (and therefore insecure) results. An ":int" guard on the parameter will prevent such a fake integer from crossing into your realm.


* A powerbox enables revocation of all the authorities that have been granted. When you are done using a less trusted subsystem, the authorities granted to it must be explicitly revoked. This is true even if the subsystem is executing in your own vat and you nominally have the power to disconnect all references to the subsystem and leave it for the garbage collector. Even after being severed in this fashion, the subsystem will still exist for an unbounded amount of time until the garbage collector reaches it. If the authorities it has received have not been revoked, it can surprise you with its continued operations, and continued use of authority. Not all kinds of objects in the Java API can be made directly revokable at this time, because an <span class="e">''E''</span> revokable forwarder cannot be used in all the places where an actual authority-carrying Java object is required. For example, the untrusted object may want to create a new image from a URL. The natural way of doing this would be to call the Java Icon class constructor with (url) as the argument. But if an E forwarder were handed to this class constructor, the constructor would throw a type exception. Because of this anomaly, different solutions are required in different situations. For the icon maker, it might be acceptable to require that the untrusted object read the bits of the image from the URL (through a revokable forwarder) and then convert the bits into an icon.


* A new powerbox is created for each subsystem that needs authorities from within your trust realm. If a powerbox is shared across initiations of multiple subsystems, the powerbox may become the channel by which subsystems can illicitly communicate, or the channel by which an obsolete untrusted subsystem can interfere with a new one. When the old untrusted subsystem is discarded, its powers must all be revoked, which necessarily implies that a new subsystem will need a new powerbox.
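The revocation machinery described in the second point can be sketched briefly. This is a minimal Python analogue, purely illustrative (E's revokable forwarder is an object that forwards messages to its target until a separate revoker object is triggered):

```python
# Minimal revocable-forwarder sketch (illustrative only, not E code).
# The holder of the facet can use the target until revoke() is called.

def make_revocable(target):
    state = {"target": target}

    class Forwarder:
        def __getattr__(self, name):
            if state["target"] is None:
                raise PermissionError("authority revoked")
            return getattr(state["target"], name)  # forward to real object

    def revoke():
        state["target"] = None   # drop the only path to the real object

    return Forwarder(), revoke

class Timer:
    def now(self):
        return 42

facet, revoke = make_revocable(Timer())
assert facet.now() == 42   # works while the grant stands
revoke()                   # powerbox's revokeAll() would do this for every grant
```

A powerbox would keep each `revoke` in a map keyed by the granted object, which is exactly what the revokers map in the E sample below is for.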


In the following example, the less trusted object may be granted a Timer, a FileMaker that makes new files, and a url. The object may request a different url in the course of operations, in which case the powerbox will ask the user for authorization on the object's behalf; the old url is revoked, and the new one substituted, so that the object never has authority for more than one url at a time. The object that operates the powerbox may revoke all authorities at any time, or it may choose to revoke the Timer alone. Finally, the operator of this powerbox may, for reasons external to the powerbox's knowledge, decide to grant an additional authority during operations, an authority whose nature is not known to the powerbox.


<pre>
<nowiki># E sample
# The authorized url may change during operations, so it is a var
def makePowerboxController(optTimer,
                           optMakeFile,
                           var optUrl,
                           optMakeUrl,
                           makeDialogVow) {

    # In the revokers map, the object being revoked is the key, the revoker
    # is the value. Note it is the actual object, not the forwarder to the
    # ...
}

# now, show how to create a powerbox and hand it to an untrusted object
def makeUntrustedObject(powerbox) {
    def timer := powerbox.optCap("TIMER")
    def urlVow := powerbox.requestUrl("http://www.skyhunter.com")
    def untrustedObject {
        #...
    }
    return untrustedObject
}

def powerboxOperator() {
    def makeDialogVow {
        #...construct dialog vow
    }
    # use real objects, not nulls, in typical operation, though nulls are valid arguments
    def controller := makePowerboxController(null,
                                             null,
                                             null,
                                             null,
                                             makeDialogVow)
    def powerbox := controller.getPowerbox()
    def untrusted := makeUntrustedObject(powerbox)
    # later, confer an additional authority
    def special
    controller.conferCap("SPECIAL", special)
    # when done with the untrusted object, revoke its powers
    controller.revokeAll()
}
</nowiki>
</pre>


Both the file facet and the url facet restrict the user to methods that return immutables. If these facets returned mutables, like streams, then those mutables would also have to be wrapped in revokable forwarders. In the general case, this requires the use of a membrane. <span class="note" style="color:red"> write section on membranes, and link</span>


====Pet Names and Forgery among partially trusted participants====


E's encrypted authenticated references ensure that random attackers get no traction trying to break into your distributed system. As mentioned in the earlier minChat audit, this means that all interesting break-ins will be made by "insiders", i.e., people who have been granted some capabilities inside your system. We have talked so far about low-level attacks that attempt to acquire or use inappropriately conveyed authority. At a higher conceptual level, people can try to present themselves as someone else and get your user to intentionally, but erroneously, grant other information or capabilities.


minChat dodged this problem because it is a 2-person chat system: the person at the other end of the line will have difficulty convincing your user he is someone else, because your user knows the one and only person given the URI (because he used PGP encryption as described earlier to ensure that only one person got it). But in a multi-person chat system, Bill could try to pretend to be Bob and lure your user into telling him Bob's secrets.


There is no single all-purpose solution to this risk. However, one critical part of any specific solution must be this: Do not allow the remote parties to impose their own names for themselves upon your user.


[[Pet_Names]]


The general solution is to use ''pet names'', i.e., names your user chooses for people, regardless of how those people might name themselves. Of course, it is reasonable to allow the other people to propose names to your user (called ''nicknames''), but the user must make the final decision. The user must also be able to change her mind at a later date, when her list of participants grows to include not only Bob Smith but also Bob Jones, and she must disambiguate the pet name "Bob" she had had for Smith.


The general layout of such a pet name setup might be as follows:


<pre>
<font color="#ff0000">pet name sample</font>
</pre>
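Until that sample is written, a minimal sketch of the idea may help. The following Python fragment (all class and method names hypothetical) shows the one invariant that matters: remote parties may propose nicknames, but only the local user assigns the pet name that is actually displayed, and a proposal can never overwrite the user's choice.

```python
# Hypothetical pet-name table sketch. The key would be some stable,
# unforgeable reference identity; strings stand in for it here.

class PetNameTable:
    def __init__(self):
        self._by_key = {}   # reference identity -> displayed name

    def propose_nickname(self, key, nickname):
        # a remote party may suggest a name, but only for an unknown key,
        # and the suggestion is visibly marked as unconfirmed
        self._by_key.setdefault(key, f"(unconfirmed) {nickname}")

    def set_pet_name(self, key, name):
        self._by_key[key] = name    # the user's decision is final

    def display_name(self, key):
        return self._by_key.get(key, "(unknown)")

table = PetNameTable()
table.propose_nickname("key-bob-smith", "Bob")
table.set_pet_name("key-bob-smith", "Bob Smith")     # user disambiguates
table.propose_nickname("key-bob-smith", "Trusted Bob")  # ignored
```

The user's later renaming of "Bob" to "Bob Smith" when Bob Jones arrives is just another `set_pet_name` call.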


===Example: Satan at the Races===


Earlier we presented a single-computer racetrack game as an example. Here we present a distributed version of Racetrack, allowing people to compete over the network. And, just for a bit of spice, we have thrown in the assumption that one of the drivers will be Satan. Satan will, of course, try to win...or try to wreck other people's cars (keeping his own safe, of course)...or try to ensure that someone he favors will win, presumably someone who has offered up his soul in exchange for victory.


Fortunately, as we all know from the story of the time the Devil came down to Georgia, as long as we can force Satan onto a fair playing field, he can indeed be beaten. We will need scrupulous capability security, however, to keep that field fair.


As noted earlier, security within the <span class="e">''E''</span> framework is largely a matter of architecture. Therefore we will look at the issues for a secure distributed architecture before we look at the code.


First of all, where do we want to put the divide between the server functionality and the client functionality? If the author of this book owned stock in a diskless workstation company, we would undoubtedly present a "thin client" architecture, wherein everything except user interface code lived on the server. We will look at other possibilities here, however, since other possibilities abound.


E allows us to distribute the computation to suit our needs. We could build a thin client, a thin server, or anything that lies between these extremes. The particular architecture strategy we will use for this security example, with our souls on the line because Satan is coming to the party, is a "thin network" layout. We will minimize the amount of information being sent back and forth between the server (owned by the trusted third party who authoritatively asserts who won the race) and the clients (owned by the race car drivers, including Satan). We will especially minimize the number and power of the references (capabilities) the clients have on objects on the server, since this is the major path Satan will have for attacking the system.


With this as the architectural goal, we can reduce the information flow from client to server to merely proposals for the acceleration changes for the cars. Each driver (i.e., each client) should have access to the server only to the extent of being able to tell his own car on the server what acceleration he proposes. The clients must absolutely not have access to any cars on the server but their own. Consequently, a method like "setAccelerationForCar(name)" cannot be the way car directions are specified: Satan, upon learning the names of the cars, could easily start specifying accelerations for everyone's cars, not just his own.


The server will initially send out maps to the clients: rather than giving the clients read authority on the raceTrack's map, the server will send each client an immutable pass-by-copy description of the map and the car starting locations. Sending out immutable copies that are locally accessible will simplify the code, reduce the communication needed during the actual game, and most significantly, will eliminate a set of interfaces that would need security audits.


As in the single-computer version of the game, the raceTrack will collect accelerations input by the drivers until it has accelerations from all the drivers, then it will authoritatively describe those accelerations to all the drivers. Note the careful language we use to describe the accelerations being sent around the system: driver clients send ''proposed'' accelerations for their own cars to the server; the server sends ''authoritative'' accelerations to the clients. Satan's easiest technique for assaulting the track would be to simply modify his client so he can specify accelerations beyond -1, 0, +1; it would be a substantial advantage for Satan to be able to accelerate +15 in a single turn. So the server must validate the accelerations, and not rely on validation in the user interface as the single-computer version did.
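The two defenses just described can be sketched in a few lines. This Python fragment (class and function names invented for illustration; it is not the racetrack code itself) shows a facet that closes over exactly one car, so there is no car-by-name lookup to abuse, and server-side validation of the proposed acceleration:

```python
# Illustrative sketch: per-driver car facets plus server-side validation.

class Car:
    def __init__(self, name):
        self.name = name
        self.velocity = 0

    def apply(self, accel):
        self.velocity += accel

def make_driver_facet(car):
    # the facet closes over one car only; the driver never names a car
    def propose_acceleration(accel):
        if accel not in (-1, 0, 1):          # validated on the server,
            raise ValueError("illegal acceleration")  # not in the client UI
        car.apply(accel)
    return propose_acceleration

harley = Car("harley")
satans_facet = make_driver_facet(harley)
satans_facet(1)        # legal proposal, accepted
```

Handing Satan `satans_facet` rather than a reference to the track means his +15 is refused and other drivers' cars are simply unreachable from his side of the boundary.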


As noted earlier when discussing pet names, another clever attack is to forge someone else's signature. If Satan could pretend to be whichever driver actually wins the race, and persuade people to believe him, it would be as good as actually having won (and quite a bit sweeter as well). This racetrack does not use the full-strength pet name strategy to prevent this. Rather, the server assigns each car a name. No negotiation is allowed.


So far, we have only considered attacks that can be made from the client on the server. Are there any security concerns going the other way, i.e., secret information that, if acquired by Satan, would improve his situation? If this were a game of poker, not a game of racetrack, the answer would be yes, and we would have to audit the data being sent to the client as well as the capabilities. For racetrack, however, this is not an issue.


What else can Satan do to sabotage the race? He can, of course, flood the server with spurious messages in a denial of service attack. As stated earlier, <span class="e">''E''</span> by itself cannot prevent this. But for this critical game, we can assume the trusted third party operating the server will work with the ISPs to prevent it from inhibiting game play. There is one other denial-of-service attack Satan can undertake, the simplest attack of all: he can simply walk away from his computer, leaving the game with one car that never gets new acceleration instructions. In the single-computer version of racetrack, the game never stepped forward to the next turn until all the cars had new instructions. Left unchanged, this would allow Satan to starve all the drivers to death and then claim victory. Consequently, we must put a time limit into the distributed version, so that game play continues even if one of the players loses his soul in the middle of the competition.
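The turn-limit logic might be sketched as follows (Python, purely illustrative; names are invented): the server advances the turn when either all proposals are in or the deadline passes, treating any absent driver as coasting with acceleration 0.

```python
# Illustrative turn-timeout sketch: the game never waits forever for a
# driver who has walked away from his computer.
import time

def collect_turn(drivers, proposals, deadline):
    # proposals: dict driver -> accel, filled in asynchronously elsewhere
    while time.monotonic() < deadline and len(proposals) < len(drivers):
        time.sleep(0.01)
    # absentees default to 0: the race continues without them
    return {d: proposals.get(d, 0) for d in drivers}

proposals = {"alice": 1}                      # satan never responds
result = collect_turn(["alice", "satan"], proposals,
                      time.monotonic() + 0.05)
```

In the real game the deadline would be seconds or minutes per turn; the principle is only that game progress cannot be held hostage by a silent player.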


That completes the security-oriented architectural considerations for the racetrack. One other architectural note: The cars will be implemented using the ''unum'' design: each client system will have a ''client presence'' of the car locally instantiated, synchronized in each game turn with the ''host presence'' of the car that resides on the server. As noted briefly earlier, the unum is a supercharged version of the Summoner pattern; in the Summoner pattern synchronization is not performed. <font color="#ff0000">Before this becomes cast in stone, I don't love "host presence". "Authoritative presence" is too long. How 'bout "essence"? Or "true presence"?</font>

==Secure Distributed Computing==

===Capabilities===


====Keys====

In the real physical world, if you had to depend on children to fetch CDs, you would not use an ID badge. Instead you would use keys. You would give the child a key to the front door, and a key to the CD cabinet. You would not give the child a key to the gun vault.

All current popular operating systems that have any security at all use the ID badge system of security. Windows, Linux, and Unix share this fundamental security flaw. None come anywhere close to enabling POLA. The programming languages we use are just as bad or worse. Java at least has a security model, but it too is based on the ID badge system--an ID badge system so difficult to understand that in practice no one uses anything except the default settings (sandbox-default with mostly-no-authority, or executing-app with total-authority).

The "children" are the applications we run. In blissful unawareness, we give our ID badges to the programs automatically when we start them. The CD cabinet is the data a particular application should work on. The gun vault is the sensitive data to which that particular application should absolutely not have access. The children that always run to get a gun are computer viruses like the Love Bug.

In computerese, ID badge readers are called "access control lists". Keys are called "capabilities". The basic idea of capability security is to bring the revolutionary concept of an ordinary door key to computing.

====Melissa====

Let us look at an example in a computing context, of how keys/capabilities would change security.

Consider the Melissa virus, now ancient but still remembered in the form of each new generation of viruses that use the same strategy Melissa used. Melissa comes to you as an email message attachment. When you open it, it reads your address book, then sends itself - using your email system, your email address, and your good reputation - to the people listed therein. You only had to make one easy-to-make mistake to cause this sequence: you had to run the executable file found as an attachment, sent (apparently) by someone you knew well and trusted fully.

Suppose your mail system was written in a capability-secure programming language. Suppose it responded to a double-click on an attachment by trying to run the attachment as an emaker. The attachment would have to request a capability for each special power it needed. So Melissa, upon starting up, would first find itself required to ask you, "Can I read your address book?" Since you received the message from a trusted friend, perhaps you would say yes - neither Melissa nor anything else can hurt you just by reading the file. But this would be an unusual request from an email message, and should reasonably set you on guard.

Next, Melissa would have to ask you, "Can I have a direct connection to the Internet?" At this point only the most naive user would fail to realize that this email message, no matter how strong the claim that it came from a friend, is up to no good purpose. You would say "No!"

And that would be the end of Melissa, all the recent similar viruses, and all the future similar viruses yet to come. No fuss, no muss. They would never rate a mention in the news. Further discussion of locally running untrusted code as in this example can be found later under Mobile Code.

Before we get to mobile code, we first discuss securing applications in a distributed context, i.e., protecting your distributed software system from both total strangers and from questionable participants even though different parts of your program run on different machines flung widely across the Internet (or across your Intranet, as the case may be). This is the immediate topic.

====Language underpinnings for capabilities====

There are a couple of fundamental concepts that must be gotten right by a programming language in order to use capability discipline. We mention these here.

====Memory Safety: Reach objects through references, not pointers====

Pointer arithmetic is, to put it bluntly, a security catastrophe. Given pointer arithmetic, random modules of code can snoop vast spaces looking for interesting objects. C and C++ could never support a capability system. Java, Smalltalk, and Scheme, on the other hand, did get this part of capability discipline right.

====Object Encapsulation====

In a capability language you cannot reach inside an object for its instance variables. Java, Smalltalk, and Scheme pass this test as well.

In JavaScript, as a counterexample, all instance variables are public. This is occasionally convenient but shatters any security hopes you might have. JavaScript is a relatively safe language only because the language as a whole is so thoroughly crippled. We consider JavaScript safe, but not secure: security requires not only safety but also the power to get your work done. POLA means having enough authority, as well as not having too much. Using this definition of security, the Java applet sandbox is mostly safe, but not at all secure. And a Java applet that has been allowed to run in a weaker security regime because the applet was "signed" is neither safe nor secure.

====No Static methods that grant authority, no static mutable state====

In a capability system, the only source of positive authority for an object should be the references that the object holds.

Java fails here, along with Smalltalk and Scheme. A famous example of the trouble that static mutable state can get you into appeared in Java 1.0 (corrected in 1.1, an upward compatibility break so rarely used they could get away with it). The object System.out, to which people routinely sent print messages, was replaceable. A programmer in the middle of a bad hair day could easily replace this object, reading everything everyone else was doing, and preventing anyone else from reading their own outputs.

====Carefully design the API so that capabilities do not leak====

You can make everything else right, but if the APIs for a language were designed without consideration of security, the capability nature of the system is seriously flawed. Let us consider an example in Java. Suppose you had an analysis program that would present graphs based on the contents of an existing spreadsheet. The program only needs read access on the one spreadsheet file, it needs nothing else in your file system. In Java, then, we would grant the application an InputStream.

Unfortunately, an InputStream in Java leaks authority. In this example, you could "cast down" to a FileInputStream, from which you could get the File, from which you could get a WriteStream and a whole filepath, which would give you total access and control over the entire directory system.

To fix this problem for a single object in a single application, you could write your own InputStream class that doesn't leak. This strategy does not scale, however: requiring the programmer to write replacements for every object in the API to achieve security will result in few secure programs (just as requiring the programmer to write his own bitmap handlers and gui widgets will result in few point-and-click programs). To really fix this security problem in Java, you would have to rewrite the entire java.io package, wrecking backward compatibility as a side effect. Without that kind of massive, serious repair, it is always easier to create a breach than to create a secure interaction. With an infrastructure actively trying to harm you, what chance do you really have?
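By contrast, an attenuated read-only facet hands over only the data, leaving no object from which broader authority can be recovered. A Python sketch of the idea (function names hypothetical; not Java, but the shape of the fix is the same):

```python
# Illustrative attenuation sketch: the analysis code gets read access to
# one file's bytes and nothing it can "cast down" to reach the file system.
import os
import tempfile

def make_read_only_facet(path):
    with open(path, "rb") as f:
        data = f.read()              # file authority exercised once, up front
    def read(offset=0, size=None):
        end = len(data) if size is None else offset + size
        return data[offset:end]      # immutable bytes; no File, no path
    return read

# demo with a throwaway file standing in for the spreadsheet
fd, path = tempfile.mkstemp()
os.write(fd, b"spreadsheet")
os.close(fd)
facet = make_read_only_facet(path)
os.remove(path)                      # even the file itself is gone; the
                                     # facet still serves only its snapshot
```

The facet is the entire interface the graphing program receives; there is nothing to downcast, so there is nothing to leak.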

====E Capabilities====

E has no pointer arithmetic. E has no mutable statics. E has an API carefully thought out to prevent capability leaks. This would make it a capability secure language for single-processor applications. But E goes a step further. It takes the concept of a secure, unforgeable object reference and extends it to distributed objects:

* The communication links are encrypted. Third parties cannot get inside the connection.

* The objects are unfindable without a proper reference received (directly or indirectly) from the creator of the object. You must have the key to unlock the door.

* No object can pretend to be the object you are trying to contact because identity cannot be hijacked.

These aspects of E protocol can be understood better by looking at the URI that identifies an E object to other objects outside its own vat.

====A closer look at E capability URIs====

The whole E approach to security is disorienting to anyone steeped in traditional computer security lore. On the other hand, anyone with a background in object-oriented programming will find it to be a natural extension of the OO discipline. Another entire book is needed on the philosophy of security exemplified by object capabilities and E. The closest thing to such a Philosophy of Security known to the author is a requirements document for the shared virtual 3D world Croquet. Croquet needs to be both very user friendly and very secure, and so requires the kind of seriousness that object-capabilities enable.

A brutally abbreviated list of points from that document includes:

* '''No Passwords, No Certificates, No Certificate Authorities, No Firewalls, No Global Namespaces.''' POLA, pet names, and other related techniques described here supplant them all.

* '''Minimize Authentication. Focus on Authorization.''' Authentication-laden systems are as user friendly as post-9/11 airport security systems. And they work about that well, too. The terrorists are far more likely to have their identification in order than the rest of us.

* '''Do Not Prohibit What You Cannot Prevent.''' The main purpose of this advice is to prevent you from looking like a fool after you've been circumvented.

* '''Embrace Delegation.''' People must delegate to succeed. So they will delegate despite your most ridiculous efforts. Relax, enjoy it. Help them. And while relaxing, re-read the earlier "do not prohibit what you cannot prevent."

* '''Bundle Authority with Designation.''' This makes security user-friendly by making the security invisible. Only possible after you stop wasting your effort on authentication all the time.

====Security as an inexpensive lunch====

There is no such thing as a free lunch, but this does not rule out the possibility of lunches at bargain prices. When programming in E, you are automatically working in a capability secure environment. All references are secure references. All powers are accessible only through capabilities. Making an E program secure is largely a matter of thinking about the architecture before you code, and doing a security audit after you code. When designing and auditing you use a small number of principles for scrutinizing the design:

====Principle of Least Authority (POLA) for Computer Programs====

The Principle of Least Authority (known henceforth as POLA), which has been used by human beings instinctively for thousands of years, translates to computer programs fairly straightforwardly: never give an object more authority than it needs. In particular, if an object may be running on a remote untrusted machine, think very carefully about the minimum set of authorities it needs, and give it capabilities only on facets (described later) that ensure it gets no more. A simple word processor needs read and write access to the one file it is being used to edit, and it needs read-only access on all the system fonts. It does not need any access to anything else. Do not give it access to anything else. If it asks for something else, it is lying to you, and you should not trust it.

Principle of Hardware Software Ownership

When developing software, remember that the person who controls the hardware always, at the end of the day, controls the software. Hence, if you send someone a piece of a distributed program to run on their own equipment, that person totally and utterly owns everything that resides on their machine. They can modify the code you gave them, or rewrite it from scratch yet make it look from the outside as if it were identical. You must therefore think carefully about which features of your system you can really trust on that remote machine. A key feature of E that enhances its reliability is that objects which are manufactured in a particular vat remain resident in that vat, so that the objects remain as reliable as the objectMakers used to produce them. Only transparent immutables (immutables that don't encapsulate anything) actually move across computational boundaries.

Many people have made the error of believing this principle of hardware ownership can be circumvented. At the time of this writing, the music recording industry is throwing away truly fabulous sums of money on schemes that assume they can somehow control data after it has arrived on a user's own hardware. Microsoft is busily developing Palladium (uh, I mean, NGSCB). Intel is busily developing TCPA (uh, I think they changed the name to LaGrande). Their fate has already been foretold in the fate of the popular game Diablo I: authoritative representations of data were allowed to reside on user computers, assuming that the object code was too difficult to understand to be hacked and that the Diablo client code would always behave as the designers intended. They were 99% correct, which in the computer security field means they were 100% wrong. Today, 99% of the people who hack Diablo I don't understand the object code. But somewhere some single individual figured it out and posted the result on the Web. Now your grandmother can sabotage shared Diablo I games as quickly and easily as the most accomplished hacker in history. For Diablo II, the developers had learned the lesson. Authoritative information is stored only on the server, giving them the beginnings of true security.

The hardware is not the only thing that owns the software; the underlying operating system does too. As we have stated repeatedly here, E enables the construction of extremely secure computing systems. However, E systems can still be attacked from below, by viruses granted authority by the operating system underneath the E world. Such attacks can only be prevented by completing the security regime, either with capability-based operating systems, or with computers on which only capability-secure programs are allowed. There is one open-source capability-based OS under development, at www.eros-os.org.

Denial Of Service Attacks

One form of attack that even E cannot defend against is denial of service (DoS). In a denial of service, someone simply swamps your system with requests, making your system unable to operate. Such attacks take many forms, but the key limitation in such attacks is this: such an attack can never steal your secret data or gain control of your sensitive operations. If DoS were the only kind of attack possible, the world would be a vastly better place. Indeed, if only DoS attacks were possible, even most modern DoS attacks would fail, because the serious DoS attacks require that the attacker first gain control of hundreds of computers that belong to other people, by using attacks far more invasive and damaging than DoS itself.

Example: Auditing minChat

The minChat application presented earlier as the example for a distributed system was designed with no consideration at all for security issues. This is literally true: minChat is derived from eChat, which was written by the author as his first practice exercise to learn E. However, because eChat was written in a capability secure environment, because the author used a fairly clean modular architecture, and because clean architecture in a capability-secure infrastructure makes its own luck, there were no serious security breaches in eChat. And, as we shall see, there are no serious security breaches in minChat, though there are many interesting lessons we can learn from it.

First of all, in minChat as in all E programs, the communication is all encrypted and the objects are all unfindable and unguessable. Therefore no third party can enter the conversation to either eavesdrop or forge messages. The only source of security breach will be the person at the other end of the chat system, i.e., the person with whom you planned to chat. So, right off the bat, simply by using E you have eliminated outsider attacks. But insider attacks are still possible, and indeed constitute the most serious threat to most security systems anyway. You have to be able to achieve cooperation in the presence of limited trust. Cooperation despite limited trust is something we humans achieve every day, for example when paying 3 bucks for a gallon of milk at the local QwikMart. But it is notoriously difficult to get these transactions working correctly on computer systems when using conventional languages. So we will examine minChat very closely to ensure that the person we want to chat with does not also get any inappropriate powers, like the ability to delete all our files.

Now, let us begin with a quick review of minChat's code to see what parts of the system we need to examine to do a security review. As you may recall from the chapter on Ordinary Computing, emakers come into existence with no authority whatsoever, and receive authority only via the arguments that are passed into their objects. In larger programs, one can often ascertain that the emaker receives so little authority it can be disregarded from a security perspective: if it doesn't have enough authority, it just can't be a danger (as documented in the DarpaBrowser Final Report and Security Review).

MinChat is just 2 pages of code. It does not use emakers, so this does not help in our current situation. However, what does help is that the only object accessible to the outside world is the chatController, whose reference is sent to the friend (or, in this case, perhaps the enemy) with whom we wish to chat (hey, we need to talk to our enemies too, from time to time). What unpleasantness can our friend do on our computer through the chatController interface he receives? We will reproduce the crucial lines of code here for convenience:

There are only 5 messages here. Let's go through what our friend could do with each one of them in sequence.

send(message): This is the method that sends a message to our friend. If our friend wants to send messages to himself, which we can ourselves read in our own chat window, this is amusing, but not very exciting as an attack.

receive(message): This is one of the 2 messages our friend is supposed to use, to send us a message. The message gets placed in the chatArea text pane. Since we are receiving data here, even a cracker of modest powers would look at this and leap quickly to the conclusion that there is an opportunity here for a buffer overflow attack--the type of attack that, at the time of this writing, gets the most news media headlines because Windows is so richly riddled with such exploits.

Memory safe languages including E are not in general susceptible to buffer overflow attacks: if you overrun the end of a data structure of fixed size, you get an exception, not a data write into random memory. Alas, this is not the end of the story. The text areas in the widget toolkit are native widgets. Eventually, we send this text string of unbounded length to native code, written in an unsafe language in an unsafe way with unsafe quantities of authority. So buffer overflow attacks, and other attacks based on the strategy of sending a stream of data that will surprise a decoding algorithm into unexpected behavior, are possible in theory. This will remain a risk until all the widgets in our toolkits are also written in object-capability languages.

The only good news here is that the text widget uses a very simple data type, and has been attacked by accident by so many millions of programmers over the years that it has had to be fixed just so the system doesn't crash if you accidentally drag/drop a DLL file onto Notepad. So even under Windows, the text widget is robust against strange data. But this is an area of risk of which we must always be aware when using components written outside the capability paradigm. Text widgets are pretty safe, gif and bmp widgets are probably safe, but by the time you encounter a widget using manically optimized C to decode mpeg-4, you're looking at a widget that is complicated enough that it is probably vulnerable.

receiveFriend(friendRcvr): Our friend could specify someone who is not himself as the person we should send our messages to. Of course, our friend could write a slightly different version of minChat that does the forwarding to everyone he wants to see it, automatically. Or he could simply delegate the capability reference to someone else (i.e., hand a copy of it to another party). So this is not very exciting either.

save(file): Now this is a very interesting method for attack. The opportunity for the friend to save data on our computer is an interesting one, and certainly outside the domain of what we intended for simple chatting. There's a little problem, however, from the attacker's point of view. The save method is receiving, as a parameter, a capability (i.e., a reference) to a local file. This capability is not forgeable by the friend unless he has access via other means to our file system, in which case we already granted him such power and are assuming he won't abuse it (if he has such power, he can abuse it more effectively by other means, abusing it by sending to minChat is silly). The upshot is, the worst our malicious friend can do is send us an object that is not actually a file object, which will not respond properly to messages sent under the assumption that the receiver is a near file object, causing a "no such method" exception that will crash our session. So, our friend can crash minChat, but can't do any serious harm: an attack whose only result is to cut off the target from the attacker's attacks hardly constitutes a winning gambit.

load(file): Not only is loading a file less interesting than writing, or corrupting, one, but worse, this method once again expects a local file object to work with. Again, the worst case is that the attacker can cause a session crash. An interesting variant of the attack might be to try to ship an object that looks and feels like a file for minChat to play with. This would fail directly because the load operation is filling a def'd variable that is already bound, resulting in a thrown exception. But let us assume for a moment that we made the variable "friend" a var, rather than a def that was later bound. The result in this case, if successful, would simply be to redirect the minChat client to a different person for further communication (since what is being loaded is the reference to the friend). Still no joy for the attacker.

The upshot is, even the friend who has received the capability to talk to our chat tool doesn't get much traction. He can annoy us (by crashing minChat), but that is pretty much the only interesting thing he can do....or so it seems on first review.

Let's see if there is anything exotic an attacker could do by combining several of these unimportant attacks. First let us make the problem more concrete. Suppose Bob has a crush on Alice, who is dating Ted. Bob decides to make Alice think that Ted doesn't really like her. So he gives Alice his chat reference to Ted, and tells her that, when using this reference, Ted will think she is really Bob, and so Alice will be able to get Ted to "speak candidly" (to Bob) about her. At this point, Bob uses the "send" method on Ted's computer to send messages that Alice will read, messages that look to Alice like Ted is sending them! Bob sends cruel jokes about Alice, and Alice breaks up with Ted.

Hmmm....it looks pretty bad for Ted. Did we just find a really cool attack?

Well, sort of. First there is the risk that Ted will notice that his chat tool is, without any help from its owner, sending offensive messages to Bob. Ted might then wonder what is going on. But more fundamentally, Bob has a much simpler scheme for messing with Alice's headspace, if Alice is open to this type of attack.

Far simpler for Bob would be to create 2 brand new chat capabilities: one is Bob's FakeTed, the other is Bob's FakeBob. He has Alice start up as FakeBob, and Bob himself starts up FakeTed. Now Ted is completely out of the loop, and Bob can send any horrid thing to Alice he wants to send, without interference.

So is this a fundamental flaw with the entire E security approach? Not exactly. This attack can be made regardless of the technology being used -- Bob could have created 2 Yahoo Instant Messenger accounts and played the same game. Indeed, variations on this attack were used repeatedly by William Shakespeare throughout his career to create both high tragedy and low comedy. The fundamental problem comes when you trust a middleman as the sole validation of the authenticity of messages coming from another person. This is a human problem, not a technology problem. The best source of contact information for a person is the person him/herself; trust a third party at your own risk. Often the risk makes sense, but don't ever forget there is a risk there, whether you are using capability security or just a parchment and quill.

We have come full circle to the conclusion that minChat doesn't have any serious security breaches. Nonetheless, there are a couple of annoying things Bob may do to Ted. These annoyances can be traced directly to the presence of five messages in the chatController: there are only two messages out of the five that the friend is "supposed" to use. Even forgetting about security reviews for a moment, it is clear that, if the friend is only supposed to have two methods, he should not have five methods at his disposal. This makes sense from a simple modular design point of view: it limits the number of mistakes you can make. For the security reviewers, this is an enormous win, because it reduces the number of methods to examine by 60%. This is a substantial gain, given how circuitous and complicated security analysis can become.

So how do we cut the number of methods available to the friend? By using a facet of the chatController that only has the appropriate methods:

# E syntax
# Place the following object above the binding of the chatController
def chatFacet {
    to receive(message) {chatController.receive(message)}
    to receiveFriend(friendRcvr) {chatController.receiveFriend(friendRcvr)}
}
# Now modify the chatController's save method to save the chatFacet, not the chatController:
to save(file) {file.setText(makeURIFromObject(chatFacet))}
# and similarly modify the load method to send the chatFacet, not the chatController:
to load(file) {
    bind friend := getObjectFromURI(file.getText())
    friend <- receiveFriend(chatFacet)
}

This version of minChat is more secure (only a little more secure, though, since it was pretty secure already), and vastly easier to review.

Security Implications of FORTRAN-Style E

There are still a number of interesting lessons we can learn here by considering some variations on minChat. The first variation is based on the old adage, "You can write FORTRAN in any language." This is true in E as well, but the implications can be grievous.

Let's go back to the minChat without the chatFacet, and consider the following small change to the program. Suppose that the save method in the chatController, instead of receiving a file object from the chatUI and creating the uri string itself, received the file path and the uri string from chatUI:

This is a fairly FORTRAN-ish way of accomplishing the goal: since FORTRAN doesn't have objects, it may seem perfectly obvious and reasonable to just send the path to the file and let the chatController figure out how to write to that file on its own.

However, in this situation, it is a disaster. Without the chatFacet, now the friend at the far end has a method he can call that allows him to write any data he wants, into any location the user can reach on his own computer:

# on the friend's machine
# put the code for the SubSevenServer Trojan in a string
def code :String := "@#$!...and so on, the character representations of the code bytes"
# The friend's friend is our poor user
# Put the trojan horse in the user's startup profile
friend <- save("~/startup/sub7.exe", code)

Now we're having some fun! Every time the user logs onto his computer, the friend gets full control of all his resources.

What went wrong here? There are several ways of slicing the pie and pinning different pieces of the code with blame. One we have already identified: we failed to follow POLA in allowing the friend to call the save method in the first place. But another one is just as crucial: we failed to follow POLA twice with this revised version of the save method that gets 2 strings.

We failed first by sending the save method an insufficient amount of authority (remember, Least Authority also means Adequate Authority). We did this by sending the chatController a string describing the file's path rather than just sending it the file itself. As a consequence, the chatController had to use special powers gotten from elsewhere to translate the description into a file object.

Once the chatController had to do this translation, it became vulnerable to the Confused Deputy Attack, a classic in security literature. The Confused Deputy attack is the aikido move of computer security: the attacker persuades the target to use the target's own authority against itself. Confused Deputy attacks are very hard to pull off as long as a single reference serves as both the designation of the object and the authority to use the object (as is always the case with the objects in a capability-based programming language). But when the authority to use the object is separated from the designation, as it is when a path string is being used to describe the file, you are in deep trouble.

Simply sending the file object rather than the file description solves this problem. The friend at the far end has no way to send an object that will be interpreted by the save method as a local file, and so the friend gets no traction whatsoever as long as the save method is expecting the object not the description. This leads to the general rule:

When you must read in the descriptions of objects rather than the objects themselves, translate them into capability secure objects at the earliest possible moment. When you must write out descriptions of objects rather than the objects themselves, postpone the translation of the object into a description until the last possible moment.
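This rule can be sketched in code. Below is a hedged Python illustration (the directory name, helper names, and policy are all invented): the untrusted description is translated into an object at the system boundary, under explicit policy, so that inner code only ever handles the object and can never be confused by a string.

```python
# A hedged sketch: translate a description (path string) into an
# object at the earliest possible moment, applying policy while we
# still remember the string is untrusted.
import pathlib

ALLOWED_DIR = pathlib.Path("/tmp/chat-saves")   # illustrative policy

def path_to_capability(path_string):
    """Boundary translation: description in, vetted object out."""
    p = (ALLOWED_DIR / path_string).resolve()
    if ALLOWED_DIR.resolve() not in p.parents:
        raise PermissionError("description escapes the allowed directory")
    return p   # from here on, pass the object, never the string

def save(file_obj, text):
    # Inner code sees only the object; a hostile path string like
    # "~/startup/sub7.exe" never reaches this far.
    file_obj.write_text(text)
```

A malicious description such as `"../evil"` or an absolute path is rejected at the boundary, before any inner code acquires authority from it.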

This rule is not merely good for secure, capability-oriented programming. Rather, it is good for all object-oriented programming. Manipulating the descriptions of objects is much less reliable, and much more error prone, than direct manipulation of the object itself--which is why crackers like it and object-oriented designers avoid it. This exemplifies an interesting truth: capability oriented development is often just object oriented development taken seriously.

Working directly with the object, not a description, is a rule the author learned the hard way. A security review team ripped numerous holes in a single piece of the author's software, simply because the author foolishly worked with descriptions rather than with objects. The story is told in excruciating detail in the DarpaBrowser Security Review. If you are one of the rare and lucky individuals who are able to learn from other people's mistakes, rather than having to make the mistakes firsthand before you really get it, this is a magnificent lesson to learn secondhand.

Defense In Depth versus Eggshell Security

In large software systems, most objects are constructed inside emakers, rather than in the main E program body. Consequently they come to life with severely restricted, POLA-oriented authority (i.e., only the authority we send to them when we construct them, which tends to be what they need but not anything else). Let us emulate this characteristic of large systems in our little minChat by making a paranoid version of the chatFacet:

The universalScope eval method creates the makeFacet function in strict confinement, identical to the confinement imposed on emakers. Consequently, the only authority the chatFacet has is the authority handed in, namely, the authority to send messages to the chatController.

Now let's assume we are running a more complex program, with a more complex component, i.e., a component that is not so simple we can just look at it and see it doesn't do anything very exciting. Now let's further assume that by some extraordinary bit of legerdemain the attacker succeeds in completely subverting the component (the chatFacet in our little example). How terrible is the disaster?

As we have already shown, there is no disaster. Even if the attacker gets direct access to the chatController, there still isn't anything interesting he can do. We have achieved defense in depth. A typical penetration of our system gives the attacker only a limited access to other objects, which in turn must be individually penetrated.

Compare this to the situation now typical in software engineering. Even with the most modern "secure" languages like Java and C#, an attacker can acquire the full authority of the program by subverting even the most inconsequential object. With tools like access control lists and firewalls, we engage in "perimeter defense", which is more correctly described as "eggshell defense". It is like an eggshell for the following reason: while an eggshell may seem pretty tough when you tap on it, if you can get a single pinhole anywhere in the surface, you can suck out the entire yolk. No wonder cybercrackers laugh at our silly efforts to defend ourselves. We have thrown away most of our chances to defend ourselves before the battle even begins.

Capability Patterns

Facets

As discussed in the minChat security review, facets are objects that act as intermediaries between a powerful object and users who do not need (and should not be granted) its full power. We saw a facet in use in the audited version of the minChat program, where the chatFacet was interposed between the chatController and the remote chat participant. Here is a little general-purpose facet maker:

Facets of this type can be retrofitted onto an existing system. We did this with very little effort for minChat with the chatFacet, but the technique works for far more complicated problems as well. The capability-secure windowing toolkits that E has placed on top of Swing and SWT use facetization as their main tool.
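The shape of such a facet maker can be sketched in Python (a hedged sketch, not the book's E listing; the ChatController stand-in and all names are invented). Unlike a forwarder that delegates everything, this variant exposes only an explicit allowlist of method names:

```python
# A hedged sketch of a general-purpose facet: forward only an
# explicit allowlist of methods to the underlying powerful object.
class Facet:
    def __init__(self, target, allowed):
        self._target = target
        self._allowed = frozenset(allowed)

    def __getattr__(self, name):
        # Called only for names not found on the facet itself.
        if name not in self._allowed:
            raise AttributeError(f"method {name!r} not exposed by this facet")
        return getattr(self._target, name)

class ChatController:                     # stand-in for the powerful object
    def receive(self, message):
        return f"got {message}"
    def save(self, file):                 # powerful; must stay hidden
        return "saved"

facet = Facet(ChatController(), ["receive"])
print(facet.receive("hi"))   # allowed: forwarded to the controller
# facet.save(...) raises AttributeError: the facet never exposes it.
```

Because the allowlist is explicit, a method added to the powerful object later is hidden by default rather than leaked.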

Facets can be made much more sophisticated in their restrictions on access to their underlying object. You can make a facet that logs requests and sends email when certain methods are called. In an SEC-regulated stock exchange facet, you might wish to grant the capability to make a trade only from 9:30AM to 4PM excluding weekends and holidays.

One interesting example is the use-once facet, which allows the holder of the facet to use the facet only one time. For example, this version of a chatReceiver only allows a single message to be sent:

Avoid default delegation, however. For a facet to remain secure during maintenance, it should never simply forward every message it receives. If a new method is added to a powerful object (in this example, suppose powerfulObject is updated with a new method, doPowerful2()), it should not be exposed through the facet by default: rather, the facet must hide it until it is explicitly allowed.

This risk can be exhausting to avoid, but it is always dangerous to accept. The first version of the capability windowing toolkit on Swing suppressed the handful of dangerous methods in the Java version 1.3 of Swing rather than explicitly allowing the safe methods, of which there were thousands. Within 30 days, the entire system was broken: the Java 1.4 Beta included hundreds of new, security-breaking methods. The only saving grace was that we had always known that the first version, thrown together in haste on weekends, was just a proof-of-principle and would have to be replaced. We just hadn't appreciated how soon replacement would be required.

Revocable Capabilities

If you wish to give someone restricted access to an object by handing out facets, it is quite likely you will want to revoke that access at some point as well.

The simplest way of making a capability revocable is to use a transparent forwarder that includes a revocation method:

Note that, even though the forwarder is nominally delegating all method invocations (except revoke()) to the capableObject, we cannot use the extends keyword to capture the behavior. "Extends" creates an immutable reference, so the fact that the capableObject is a var wouldn't allow you to revoke the delegating behavior. Instead we use the match [verb, args] pattern.

Capability revocation, like facet forwarding, can be based on complex sets of conditions: revoke after a certain number of uses, revoke after a certain number of days. This example uses a simple manual revocation. Indeed, this version is too simple to work reliably during system maintenance and upgrade, and a more sophisticated pattern is generally recommended:

In this pattern, the authority of the object and the authority to revoke are separated: you can hand the power to revoke to an object you would not trust with the authority itself. Also, the separate revoker can itself be made revocable.
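A hedged Python sketch of this separation (often called the caretaker pattern; Car and all names here are invented): the maker returns the forwarding proxy and the revoker as two distinct objects, so the power to revoke can be handed to a party you would not trust with the capability itself.

```python
# A hedged sketch of the separated-revoker (caretaker) pattern.
def make_caretaker(target):
    state = {"target": target}       # shared between proxy and revoker

    class Proxy:
        def __getattr__(self, name):
            if state["target"] is None:
                raise PermissionError("capability has been revoked")
            return getattr(state["target"], name)

    class Revoker:
        def revoke(self):
            state["target"] = None   # drop the only inner reference

    return Proxy(), Revoker()

class Car:
    def honk(self):
        return "beep"

proxy, revoker = make_caretaker(Car())
print(proxy.honk())   # forwarded while the grant is live
revoker.revoke()
# proxy.honk() now raises PermissionError: the grant is dead.
```

Note that the revoker holds no authority over the car itself, only over the lifetime of the grant.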

Sealers and Unsealers

A sealer/unsealer pair makes it possible to use untrusted intermediaries to pass objects safely. Use the sealer to make a sealed box that can only be opened by the matching unsealer. E has built-in support for sealer/unsealer pairs:

If you hold the unsealer private, and give the sealer away publicly, everyone can send messages that only you can read. If you hold the sealer private, but give the unsealer away publicly, then you can send messages that recipients know you created, i.e., you can use it as a signature. If you are thinking that this is much like a public/private key pair from public key cryptography, you are correct, though in E no actual encryption is required if the sender, recipient, brand maker, and sent object all reside in the same vat.
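Within a single process, the essence of a sealer/unsealer pair is unforgeable pairing rather than encryption. Here is a hedged Python model (this is not E's built-in Brand, and Python attributes are not truly private, so this only models the protocol):

```python
# A hedged model of a sealer/unsealer pair: boxes made by this
# sealer are recognized only by the matching unsealer.
def make_brand_pair():
    class SealedBox:
        def __init__(self, content):
            self._content = content   # not truly private in Python

    class Sealer:
        def seal(self, content):
            return SealedBox(content)

    class Unsealer:
        def unseal(self, box):
            # The per-pair SealedBox class acts as the unforgeable brand.
            if type(box) is not SealedBox:
                raise ValueError("box was not sealed by the matching sealer")
            return box._content

    return Sealer(), Unsealer()

sealer, unsealer = make_brand_pair()
box = sealer.seal("secret")
print(unsealer.unseal(box))           # the matching unsealer opens it
other_sealer, other_unsealer = make_brand_pair()
# other_unsealer.unseal(box) raises ValueError: wrong brand.
```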

While the Brand is built-in, it is possible to construct a sealer/unsealer pair maker in E without special privileges. The "shared variable" technique for making the maker is interesting, and because the same pattern appears in other places (such as the Notary/Inspector, coming up next), we demonstrate it here:

The variable "shared" normally contains the value "noObject", which is private to a particular sealer/unsealer pair and so could never be the actual value that someone wanted to pass in a sealed box. The unsealer tells the box to shareContent, which puts the content into the shared variable, from which the unsealer then extracts the value for the invoker of the unseal method. In a conventional language that used threads for concurrency control, this pattern would be a disaster: different unseal requests could rumble through, overwriting each other's shared content. But by exploiting the atomicity of operation sequences within a single turn of E's event loop, this peculiar pattern becomes a clean solution for this, and several other security-related operations.
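The shared-variable technique itself can be modeled in Python (a hedged sketch; it relies on calls not interleaving, which Python's synchronous calls provide here just as E's turn-based execution does):

```python
# A hedged model of the shared-variable sealer/unsealer technique.
def make_sealer_unsealer():
    no_object = object()       # private sentinel, unique to this pair
    shared = [no_object]       # the pair's private "shared variable"

    class Box:
        def __init__(self, content):
            self._content = content
        def share_content(self):
            # Writes into THIS pair's shared slot, and no other.
            shared[0] = self._content

    class Sealer:
        def seal(self, content):
            return Box(content)

    class Unsealer:
        def unseal(self, box):
            shared[0] = no_object
            box.share_content()    # only a matching box reaches our slot
            result, shared[0] = shared[0], no_object
            if result is no_object:
                raise ValueError("box was not sealed by the matching sealer")
            return result

    return Sealer(), Unsealer()
```

A box from a different pair writes into its own pair's slot, so our slot still holds the sentinel and the unseal fails, just as described above.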

Vouching with Notary/Inspector

Suppose Bob is the salesman for Widget Inc. He persuades Alice to buy a widget. Bob hands Alice a Widget Order Form with a money-receiving capability. It is important to Bob that Alice use the form he gives her, because this particular form (which Bob got from Widget Inc.) remembers that Bob is the salesman who should get the commission. It is important to Alice that she know for sure that, even though she got the order-form from Bob, this is really a Widget Inc. order form, and not something Bob whipped up that will transfer her money directly to his own account. In this case, Alice wants to have Widget Inc. vouch for the order-form she received from Bob. She does this using an Inspector that she gets directly from Widget Inc. The Inspector is the public part of a notary/inspector pair of objects that provide verification of the originator of an object. To be vouchable, the orderForm must implement the startVouch method as shown here:
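A hedged Python model of the notary/inspector relationship (this is not E's startVouch protocol; it simply models the trust split: the private notary marks genuine forms, and the public inspector verifies them by object identity):

```python
# A hedged model of a notary/inspector pair.
def make_notary_inspector():
    vouched = []      # private list of genuine objects, compared by identity

    class Notary:     # held privately by Widget Inc.
        def accept(self, obj):
            vouched.append(obj)
            return obj

    class Inspector:  # handed out publicly, e.g. directly to Alice
        def vouch(self, obj):
            if not any(known is obj for known in vouched):
                raise ValueError("object was not issued by this notary")
            return obj

    return Notary(), Inspector()

notary, inspector = make_notary_inspector()
class OrderForm:
    pass
genuine = notary.accept(OrderForm())   # Widget Inc. notarizes its own form
inspector.vouch(genuine)               # Alice's check succeeds
# inspector.vouch(OrderForm()) raises ValueError: Bob's forgery is caught.
```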

Proof of Purchase

"Proof of Purchase" is the simplest of a series of capability patterns in which the goal is not to transfer an authority, but rather to show someone that you have the authority so that they can comfortably proceed to use the authority on your behalf. In Proof of Purchase, the requester of a service is demonstrating to the server that the client already has the capability that the server is being asked to use. Unlike the more sophisticated upcoming Claim Check patterns, the proof of purchase client is not concerned about giving the server excess authority.

An interesting example of Proof of Purchase is Component.transferFocus(fromComponentsList, toComponent) found in the tamed Swing package. In standard Swing, the holder of any panel reference can steal the focus (with component.requestFocus()). Hence any keystrokes, including passwords, can be swiped from wherever the focus happens to be by sneaking in a focus change. This is a clear violation of POLA. You should be able to transfer focus from one panel to another panel only if you have references to both panels. If you have a reference to the panel that currently has the focus, then you can never steal the focus from anyone but yourself.

A natural-seeming replacement for component.requestFocus() would be requestFocus(currentFocusHolder). It was decided during the taming of Swing, however, that this would be breach-prone. If Alice transferred to Bob, not a panel itself, but rather a simple transparent forwarder on that panel, Alice could collect authorities to other panels whenever Bob changed focus.

Since we had a convenient trusted third party available that already had authority over all the panels anyway (the javax.swing.Component class object), we used the "proof of purchase" pattern instead. The globally available Component.transferFocus method accepts two arguments:

a list of panels that the client believes to contain, somewhere in their subpanel hierarchies, the focus

the target panel that should receive the focus

In the common case where a whole window lies inside a single trust realm, the client can simply present a reference to the whole window and be confident that the focus will be transferred.

The interesting thing about Component.transferFocus is that the recipient does not actually need the client's authority to reach the component that already has the focus. The Component class already has that authority. The client must send the authority merely to prove he has it, too.
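The proof-of-purchase shape of transferFocus can be sketched in Python (a hedged sketch; FocusManager, Panel, and the one-level subpanel check are simplifications invented here, not the tamed Swing code):

```python
# A hedged sketch of proof of purchase: the caller proves it already
# holds the focus-side authority; the trusted manager does the move.
class Panel:
    def __init__(self, children=()):
        self.children = children

class FocusManager:
    """Trusted third party that already holds authority over every panel."""
    def __init__(self):
        self.focus_owner = None

    def _contains_focus(self, panel):
        # One level of subpanels, for brevity.
        return self.focus_owner is panel or self.focus_owner in panel.children

    def transfer_focus(self, proof_panels, target):
        if not any(self._contains_focus(p) for p in proof_panels):
            raise PermissionError("caller did not prove it holds the focus")
        self.focus_owner = target       # manager uses its OWN authority

field = Panel()
window = Panel(children=(field,))     # the window contains the field
other = Panel()
mgr = FocusManager()
mgr.focus_owner = field
mgr.transfer_focus([window], other)   # proof accepted: window holds the focus
```

As in the tamed Swing case, the manager never needed the caller's reference to reach the focus holder; the reference is presented only as proof.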

Basic Claim Check

A common activity that brings security concerns into sharp focus is the delicate dance we perform when we use a car valet service. In this situation, we are handing the authority over our most precious and costly possession to a teenager with no more sense of responsibility than a stray cat. The whole valet system is a toothsome exercise in POLA.

In this example, we focus on the sequence of events and trust relationships involved in reclaiming our car at the end of the evening. The participants include the car owner, the valet service itself, and the random new attendant who is now on duty.

We have already chosen, for better or for worse, to trust the valet service with a key to the car. As the random new attendant comes running up to us, we are reluctant to hand over yet another key to the vehicle. After all, this new attendant might actually be an imposter, eager to grab that new Ferrari and cruise out over the desert. And besides, we already handed the valet service a key to the car. They should be able to use the key they already have, right?

Meanwhile, the attendant also has a trust problem. It would be a career catastrophe to serve up the Ferrari from his parking lot if we actually own the dented 23-year-old Chevy Nova sitting next to it.

In the physical world, we use a claim check to solve the problem. We present to the attendant, not a second set of car keys, but rather, a proof that we have the authority that we are asking the attendant to use on our behalf.
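A minimal Python sketch of the basic claim check (invented names; the book's own examples are written in E):

```python
class ClaimMgr:
    """Hypothetical claim-check manager: the valet service keeps the only
    working key; the ticket itself carries no authority of its own."""
    def __init__(self):
        self._lot = {}                 # ticket -> car

    def check_in(self, car):
        ticket = object()              # unforgeable token: cannot be guessed or minted
        self._lot[ticket] = car
        return ticket

    def reclaim(self, ticket):
        # The presenter hands in the ticket; the manager uses its own
        # authority (the stored reference) to produce the car.
        return self._lot.pop(ticket)

mgr = ClaimMgr()
car = object()
ticket = mgr.check_in(car)   # the owner receives a claim check, not a second key
```

The ticket designates the car without conveying it; only the manager's own stored reference carries the authority.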

Basic Claim Check Default Behavior

Suppose the owner of the car loses the claim check. He can still prove he owns the car by presenting another key. In software, this is comparable to the owner handing the attendant a direct reference rather than merely the claim check. The car owner has violated his own security, but it is hard to imagine a situation in which this is not a reasonable argument for him to pass to express his intent. We can cover this case with a small modification to the claimMgr's reclaim method:
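As a hedged Python sketch (invented names), the modified reclaim might accept either the claim ticket or the car itself as proof:

```python
class ClaimMgr:
    """Sketch of a reclaim that also accepts the car itself as proof:
    the owner presents "another key" in the form of a direct reference."""
    def __init__(self):
        self._lot = {}        # ticket -> car

    def check_in(self, car):
        ticket = object()
        self._lot[ticket] = car
        return ticket

    def reclaim(self, proof):
        if proof in self._lot:                   # normal case: a claim ticket
            return self._lot.pop(proof)
        for ticket, car in self._lot.items():    # fallback: a direct car reference
            if car is proof:
                del self._lot[ticket]
                return car
        # Neither a ticket nor a parked car: everyone's expectations are violated.
        raise ValueError("not parked in this lot")
```

The final `raise` reflects the choice discussed next: when the car is not in the lot at all, an exception is the safest response.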

What should the claimMgr return if the carOwner hands in a car reference for reclaim, if the car is not actually in the claimMgr's parking lot? The answer depends on the application, but such a situation violates the expectations of all the participants sufficiently that throwing an exception seems the safest choice.

NonTransferable Claim Check

We are far from done with claim checks. A careful car owner will not actually hand his claim check to the parking lot attendant. Rather, he will merely show the claim check to the attendant. After all, if you hand the claim check over to an imposter, the imposter can turn around and hand the claim check to a real attendant, pretending that he is the owner of the car.

The problem is somewhat different in cyberspace. The good news is, in cyberspace the attendant doesn't return a reference to the car just because you hand him the claim check. Instead, he merely performs the action you ask of him with the car. This set of actions is limited by the attendant's willing behavior. So the claim check is not as powerful a capability as the car itself. But it can still be a powerful capability. And the bad news is, in cyberspace, you can't just "show" it to the attendant. You have to give it to him.

Here we demonstrate a "nontransferable" claim check. This claim check can be handed out at random to a thousand car thieves, and it does them no good. The trick is, before handing over the claim check, the owner inserts into the claim check a reference to the individual he is treating as an attendant. The ClaimMgr compares the person to whom the owner handed the claim with the attendant who eventually hands the claim to the claimMgr. If these two people are the same, then the owner handed the claim directly to the attendant. Otherwise, there was an intermediary party, and the request should not be honored.
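A Python sketch of the nontransferable claim check (names invented). Note that the attendant never receives the car itself; he asks the manager to perform an action on it:

```python
class ClaimMgr:
    def __init__(self):
        self._lot = {}        # ticket -> car

    def check_in(self, car):
        ticket = object()
        self._lot[ticket] = car
        return ticket

    def make_nontransferable(self, ticket, intended_attendant):
        # The owner binds the claim to the one party he is handing it to.
        return (ticket, intended_attendant)

    def redeem(self, nontransferable_claim, presenter, action):
        ticket, intended = nontransferable_claim
        if presenter is not intended:
            # An intermediary (e.g. an imposter) relayed the claim: refuse.
            raise PermissionError("claim was handed through an intermediary")
        # The manager performs the requested action on the attendant's behalf;
        # the car reference itself never crosses to the presenter.
        return action(self._lot[ticket])
```

A thief who receives the claim cannot redeem it through a real attendant, because the presenter the ClaimMgr sees will not match the party the owner named.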

Note an important implication in this pattern: the ClaimMgr winds up in a position to accumulate references to all the entities a claim check holder treats as an attendant. In this example, the ClaimMgr gets a reference to the carThief as a result of the carOwner's casual handing out of claim checks. It seems unlikely that the ClaimMgr can harm the carOwner's interests with this reference, but in other circumstances the reference may not be so harmless. To reduce our risk exposure to the alleged attendants to whom we hand the claim check, we have increased our risk exposure to the ClaimMgr.

Note also the limitations on the nontransferability of this pattern. The carOwner can still transfer authority, either by handing someone a reference to the car, or by handing out the transferableClaim from which nontransferableClaims can be made. The nontransferability is voluntarily chosen by the carOwner. You can't prevent people from delegating their authority, and even this pattern doesn't change that fact.

Oblivious Claim Check: Loan Officer Protocol

Voluntary Oblivious Compliance, or VOC, was pioneered by the Client Utility System. VOC is a field only recently recognized by the capability-security community as an important area of exploration; the claim check patterns here are early fruit of that research.

VOC is irrelevant in the classical cypherpunk view of the world, where every person is a rugged individualist making all his own authority-granting decisions with little regard for other people's issues. In a world filled with corporations, governments, and other policy-intensive organizations, however, it is an area of real value even though it is not really a security matter. Why is it not security? We consider it beyond the scope of "security" for a simple but compelling reason: it is not enforceable. Unenforceable security in cyberspace has been proven, over and over again in the course of the last decade, to be a joke played on all the participants. VOC states in its very name the limits of its applicability: it only works with volunteers. Don't be fooled into thinking this is security.

Let us consider a somewhat different problem. Suppose that different attendants are trusted to handle different cars by the valet service. One valet has a motorcycle license and parks all the Harleys. Another has a multi-engine pilot's license and parks all the Boeing 747s. Of course, the one with the motorcycle license is a teenager who has always wanted to try his hand at parking a 747, and knows his lack of experience is not a problem. In this situation, each attendant has a different set of authorities at his command; just because you hand your claim check to a legit attendant doesn't mean the valet service thinks it would be a good idea to let that attendant drive your vehicle. A more generalized way of stating the problem is, in this case the authorities of the individual receivers of the claim checks vary, and management of the individual receiver's authorities is beyond the scope of what the ClaimManager should be trying to figure out. After all, the individuals know all their own authorities; it would be poor design (and unmaintainable at scale) for the ClaimManager to try to duplicate this information.

This situation, while not a part of the real-world valet service problem, has a significant area of application in cyberspace. This area is Voluntary Oblivious Compliance, or VOC. Let us consider a more sensible example. Alice works for HPM, and Bob works for Intil. HPM and Intil are often competitors, but Alice and Bob are working on a joint project from which both companies expect to profit handsomely. The Official Policy Makers of HPM have identified a number of documents which can be shared with Intil, and indeed have forwarded to Intil references to the allowed docs. Alice wants to refer Bob to a particular HPM document, but only if sharing the document is allowed under the HPM policy. In this case, the VOCclaimCheck that Alice sends to Bob demands that Bob demonstrate to HPM's ClaimMgr that he already has authority on the document before fulfilling the claim. To prove his authority, Bob sends to the ClaimMgr all the HPM doc authorities he has (i.e., the list of docs HPM handed to Intil). Only if both Alice and Bob already have authority on the object does Bob get the document. This is sometimes called the Loan Officer Protocol, in reference to the old adage that a loan officer will not give you the loan unless you first prove that you don't need it.

Since Bob can't get the reference unless he proves he doesn't need it, we can now see why this is a pattern of Voluntary Oblivious Compliance. It is voluntary, because Alice could just send Bob the document and circumvent the system. And it is oblivious, because Alice doesn't need to know whether a document can be shared with Bob before sending it.

Going back to the parking lot attendant who wants to park the 747, he has to demonstrate to the ClaimManager that he has the keys to the 747 before the Claim Manager will let him go for it. The owner of the 747 is much relieved.

Humorous as the 747 example might be, we will now switch to the scenario with Alice, Bob, HPM, and Intil for our sample code. The basic strategy is that the ClaimMgr not only examines the claim check from Alice, it also examines the list of authorities that Bob hands over to see if any of Bob's authorities match the one inside Alice's claim.
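That strategy might be sketched in Python as follows (invented names; the book's real code is in E):

```python
class HPMDocClaimMgr:
    """Sketch of the Loan Officer Protocol: the claim is fulfilled only if
    the presenter proves he already holds authority on the document."""
    def __init__(self):
        self._sealed = {}                 # claim ticket -> document

    def make_claim(self, doc):
        ticket = object()
        self._sealed[ticket] = doc
        return ticket

    def redeem(self, ticket, presenters_authorities):
        doc = self._sealed[ticket]
        # Match the sealed document against everything the presenter hands over.
        if any(auth is doc for auth in presenters_authorities):
            return doc
        raise PermissionError("presenter lacks prior authority on this document")

# Alice sends Bob a claim on an HPM doc; Bob redeems it by presenting
# the list of docs HPM has already handed to Intil.
shared_doc, private_doc = object(), object()
intil_shared_docs = [shared_doc]
claim_mgr = HPMDocClaimMgr()
claim = claim_mgr.make_claim(shared_doc)
```

Only documents appearing on both sides, inside Alice's claim and inside Bob's list, are ever released.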

The oblivious claim check pattern can require scattering even more authority to the winds than the nontransferable claim check. The recipient of the claim check (bob) needs to have all the authorities that alice might give to him, rather than merely the ones she actually does give to him. And the trusted third party who matches the claim check against the list of possibilities (the HPMDocClaimMgr) gets to accumulate authority to everything that bob thinks alice might be trying to send him. In the example scenario, this is just fine. But some care is required.

Oblivious Claim Checks as Guards

An elegant style of using VOC claim checks is to set up the pattern as a pair of guards. Alice would use the ClaimCheck guard to send a reference to Bob, and Bob would use the CheckedClaim guard, with the list of candidate references, to receive the authority. We will show the implementation of the BuildGuards ClaimCheck and CheckedClaim guards in Advanced Topics when we talk about writing your own guards, but their usage would appear as follows:

The ClaimCheck guard coerces the doc into a sealed representation of itself. The CheckedClaim guard unseals the claim and matches it against the authorities in the intilSharedDocs list. As a guard, it is better style to return a broken reference than to throw an exception if something goes wrong: if a parameter guard threw an exception, the recipient would have no chance to do anything about the problem; the exception would propagate directly back to the sender before the recipient was even aware there was an issue.

Oblivious NonTransferable Claim Check

It is possible to make nontransferable oblivious claim checks as well. We leave the code as an exercise for the reader.

Powerbox Capability Manager

The powerbox pattern collects diverse elements of authority management into a single object. That object, the powerbox, then becomes the arbiter of authority transfers across a complex trust boundary. One of the powerbox's most distinctive features is that it may be used for dynamic negotiation of authority during operation. The less trusted subsystem may, during execution, request new authorities. The powerbox owner may, in response to the request, depending on other context that it alone may have, decide to confer that authority, deny it, or even grant the requested authority after revoking other authorities previously granted.

The powerbox is particularly useful in situations where the object in the less trusted realm does not always get the same authorities, and when those authorities may change during operation. If the authority grant is static, a simple emaker-style authorization step suffices, and a powerbox is not necessary. If the situation is more complex, however, collecting all the authority management into a single place can make it much easier to review and maintain the extremely security-sensitive authority management code.

Key aspects of the powerbox pattern include:

A powerbox uses strict guards on all arguments received from the less trusted realm. In the absence of guards, even an integer argument received from untrusted code can play tricks: the first time the integer-like object (that is not really an integer) is asked to "add", it returns the value "123"; but the second time, it returns the value "456", with unpredictable (and therefore insecure) results. An ":int" guard on the parameter will prevent such a fake integer from crossing into your realm.

A powerbox enables revocation of all the authorities that have been granted. When you are done using a less trusted subsystem, the authorities granted to it must be explicitly revoked. This is true even if the subsystem is executing in your own vat and you nominally have the power to disconnect all references to the subsystem and leave it for the garbage collector. Even after being severed in this fashion, the subsystem will still exist for an unbounded amount of time until the garbage collector reaches it. If the authorities it has received have not been revoked, it can surprise you with continued operations, and continued use of authority. Not all kinds of objects in the Java API can be made directly revocable at this time, because an E revocable forwarder cannot be used in all the places where an actual authority-carrying Java object is required. For example, the untrusted object may want to create a new image from a url. The natural way of doing this would be to call the Java Icon class constructor with (url) as the argument. But if an E forwarder were handed to this constructor, the constructor would throw a type exception. Different solutions to such impedance mismatches with the Java type system are required in different situations. For the icon maker, it might be acceptable to require that the untrusted object read the bits of the image from the url (through a revocable forwarder) and then convert the bits into an icon.

A new powerbox is created for each subsystem that needs authorities from within your trust realm. If a powerbox is shared across initiations of multiple subsystems, the powerbox may become the channel by which subsystems can illicitly communicate, or the channel by which an obsolete untrusted subsystem can interfere with a new one. When the old untrusted subsystem is discarded, its powers must all be revoked, which necessarily implies that a new subsystem will need a new powerbox.

In the following example, the less trusted object may be granted a Timer, a FileMaker that makes new files, and a url. The object may request a different url in the course of operations, in which case the powerbox will ask the user for authorization on the objects's behalf; the old url is revoked, and the new one substituted, so that the object never has authority for more than one url at a time. The object that operates the powerbox may revoke all authorities at any time, or it may choose to revoke the Timer alone. Finally, the operator of this powerbox may, for reasons external to the powerbox's knowledge, decide to grant an additional authority during operations, an authority whose nature is not known to the powerbox.
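A compact Python sketch of such a powerbox (all names invented; revocation here is a simple severable forwarder, standing in for E's revocable forwarders):

```python
class Revoker:
    """Revocable forwarder: a thin proxy the powerbox can sever.
    In E, the revoke facet would be held separately from the forwarder."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        if self._target is None:
            raise PermissionError("authority revoked")
        return getattr(self._target, name)

    def _revoke(self):
        self._target = None

class Powerbox:
    """Sketch of a powerbox: the single arbiter of the authorities
    handed across the trust boundary."""
    def __init__(self, timer, file_maker, url, ask_user):
        self._ask_user = ask_user                  # the owner's policy hook
        self._grants = {"timer": Revoker(timer),
                        "fileMaker": Revoker(file_maker),
                        "url": Revoker(url)}

    def get(self, name):
        return self._grants[name]                  # what the subsystem receives

    def request_new_url(self, url):
        # Dynamic negotiation: on approval the old url is revoked first,
        # so the subsystem never holds authority on more than one url.
        if not self._ask_user(url):
            return None
        self._grants["url"]._revoke()
        self._grants["url"] = Revoker(url)
        return self._grants["url"]

    def revoke_timer(self):
        self._grants["timer"]._revoke()

    def revoke_all(self):
        for revoker in self._grants.values():
            revoker._revoke()
```

All grants flow through one object, so a security review of authority management has a single place to look.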

Both the file facet and the url facet restrict the user to methods that return immutables. If these facets returned mutables, like streams, then those mutables would also have to be wrapped in revocable forwarders. In the general case, this requires the use of a membrane. [TODO: write section on membranes, and link.]

But there is a problem: the forwarder leaks authority by allowing ocaps through without wrapping them in revocable forwarders. What if each revocable forwarder wrapped the call's arguments and return value?

Petnames and Forgery among partially trusted participants

E's encrypted authenticated references ensure that random attackers get no traction trying to break into your distributed system. As mentioned in the earlier minChat audit, this means that all interesting break-ins will be made by "insiders", i.e., people who have been granted some capabilities inside your system. We have talked so far about low-level attacks that attempt to acquire or use inappropriately conveyed authority. At a higher conceptual level, people can try to present themselves as someone else and get your user to intentionally, but erroneously, grant other information or capabilities.

minChat dodged this problem because it is a 2-person chat system: the person at the other end of the line will have difficulty convincing your user he is someone else, because your user knows the one and only person given the URI (because he used PGP encryption as described earlier to ensure that only one person got it). But in a multi-person chat system, Bill could try to pretend to be Bob and lure your user into telling him Bob's secrets.

There is no single all-purpose solution to this risk. However, one critical part of any specific solution must be this: Do not allow the remote parties to impose their own names for themselves upon your user.

The general solution is to use petnames, i.e., names your user chooses for people, regardless of how those people might name themselves. Of course, it is reasonable to allow other people to propose names to your user (called nicknames), but the user must make the final decision. The user must also be able to change her mind at a later date, when her list of participants grows to include not only Bob Smith but also Bob Jones, and she must disambiguate the petname "Bob" she had given Smith.
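A petname table might be sketched like this in Python (invented names):

```python
class PetnameTable:
    """Sketch: the local user, never the remote party, decides display names."""
    def __init__(self):
        self._petnames = {}    # unforgeable remote reference -> user's petname
        self._nicknames = {}   # unforgeable remote reference -> proposed nickname

    def propose_nickname(self, remote, nickname):
        self._nicknames[remote] = nickname   # a hint only; never shown as-is

    def assign_petname(self, remote, petname):
        self._petnames[remote] = petname     # the user's own, revisable, choice

    def display(self, remote):
        pet = self._petnames.get(remote)
        if pet is not None:
            return pet
        nick = self._nicknames.get(remote)
        # An unconfirmed nickname is visibly marked, so Bill claiming the
        # nickname "Bob" cannot silently pass himself off as Bob.
        return f'"{nick}" (unverified)' if nick else "<unnamed>"
```

The table is keyed by the unforgeable reference, not by any name, so two participants who both call themselves "Bob" remain distinct to the user.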

Example: Satan at the Races

Earlier we presented a single-computer racetrack game as an example. Here we present a distributed version of Racetrack, allowing people to compete over the network. And, just for a bit of spice, we have thrown in the assumption that one of the drivers will be Satan. Satan will, of course, try to win...or try to wreck other people's cars (keeping his own safe, of course)...or try to ensure that someone Satan favors will win, presumably someone who has offered up his soul in exchange for victory.

Fortunately, as we all know from the story of the time the Devil came down to Georgia, as long as we can force Satan onto a fair playing field, he can indeed be beaten. We will need scrupulous capability security, however, to keep that field fair.

As noted earlier, security within the E framework is largely a matter of architecture. Therefore we will look at the issues for a secure distributed architecture before we look at the code.

First of all, where do we want to put the divide between the server functionality and the client functionality? If the author of this book owned stock in a diskless workstation company, we would undoubtedly present a "thin client" architecture, wherein everything except user interface code lived on the server. We will look at other possibilities here, however, since other possibilities abound.

E allows us to distribute the computation to suit our needs. We could build a thin client, a thin server, or anything that lies between these extremes. The particular architecture strategy we will use for this security example, with our souls on the line because Satan is coming to the party, is a "thin network" layout. We will minimize the amount of information being sent back and forth between the server (owned by the trusted third party who authoritatively asserts who won the race) and the clients (owned by the race car drivers, including Satan). We will especially minimize the number and power of the references (capabilities) the clients have on objects on the server, since this is the major path Satan will have for attacking the system.

With this as the architectural goal, we can reduce the information flow from client to server to merely proposals for the acceleration changes for the cars. Each driver (i.e., each client) should have access to the server only to the extent of being able to tell their respective cars on the server what acceleration the driver proposes. The clients must absolutely not have access to any cars on the server but their own. Consequently, a method like "setAccelerationForCar(name)" cannot be the way car directions are specified: Satan, upon learning the names of the cars, could easily start specifying accelerations for everyone's cars, not just his own.
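In capability terms, each driver instead holds a facet bound to exactly one car, so there is no name argument to abuse. A Python sketch (invented names):

```python
class Car:
    def __init__(self, name):
        self.name = name
        self.proposed = None    # the driver's latest proposed acceleration

class CarFacet:
    """Sketch: the only server reference a driver holds. It is bound to one
    car at creation, so no argument exists through which to steer others."""
    def __init__(self, car):
        self._car = car

    def propose_acceleration(self, ax, ay):
        self._car.proposed = (ax, ay)

# The server creates one facet per driver and sends each driver only his own.
server_cars = {"red": Car("red"), "black": Car("black")}
satan_facet = CarFacet(server_cars["black"])
satan_facet.propose_acceleration(0, 1)   # can only affect his own car
```

Even knowing every car's name, a driver holding only his facet has no path to any other car.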

The server will initially send out maps to the clients: rather than giving the clients read authority on the raceTrack's map, the server will send each client an immutable pass-by-copy description of the map and the car starting locations. Sending out immutable copies that are locally accessible will simplify the code, reduce the communication needed during the actual game, and most significantly, will eliminate a set of interfaces that would need security audits.

As in the single-computer version of the game, the raceTrack will collect accelerations input by the drivers until it has accelerations from all the drivers, then it will authoritatively describe those accelerations to all the drivers. Note the careful language we use to describe the accelerations being sent around the system: driver clients send proposed accelerations for their own cars to the server; the server sends authoritative accelerations to the clients. Satan's easiest technique for assaulting the track would be to simply modify his client so he can specify accelerations beyond -1,0,+1; it would be a substantial advantage for Satan to be able to accelerate +15 in a single turn. So the server must validate the accelerations, and not rely on validation in the user interface as the single-computer version did.
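The server-side check amounts to a few lines; a Python sketch (invented names):

```python
LEGAL = (-1, 0, 1)

def validate_acceleration(ax, ay):
    """Server-side validation of a driver's proposal. The client UI's own
    validation is worthless here: a modified client can send anything."""
    if not (isinstance(ax, int) and isinstance(ay, int)):
        raise ValueError("acceleration components must be integers")
    if ax not in LEGAL or ay not in LEGAL:
        raise ValueError(f"illegal acceleration ({ax}, {ay})")
    return ax, ay
```

The server applies this to every proposal before it becomes authoritative, so a +15 burst from a doctored client is simply rejected.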

As noted earlier when discussing pet names, another clever attack is to forge someone else's signature. If Satan could pretend to be whichever driver actually wins the race, and persuade people to believe him, it would be as good as actually having won (and quite a bit sweeter as well). This racetrack does not use the full-strength pet name strategy to prevent this. Rather, the server assigns each car a name. No negotiation is allowed.

So far, we have only considered attacks that can be made from the client on the server. Are there any security concerns going the other way, i.e., secret information that, if acquired by Satan, would improve his situation? If this were a game of poker, not a game of racetrack, the answer would be yes, and we would have to audit the data being sent to the client as well as the capabilities. For racetrack, however, this is not an issue.

What else can Satan do to sabotage the race? He can, of course, flood the server with spurious messages in a denial of service attack. As stated earlier, E by itself cannot prevent this. But for this critical game, we can assume the trusted third party operating the server will work with the ISPs to prevent it from inhibiting game play. There is one other denial-of-service attack Satan can undertake, the simplest attack of all: he can simply walk away from his computer, leaving the game with one car that never gets new acceleration instructions. In the single-computer version of racetrack, the game never stepped forward to the next turn until all the cars had new instructions. Left unchanged, this would allow Satan to starve all the drivers to death and then claim victory. Consequently, we must put a time limit into the distributed version, so that game play continues even if one of the players loses his soul in the middle of the competition.

That completes the security-oriented architectural considerations for the racetrack. One other architectural note: The cars will be implemented using the unum design: each client system will have a client presence of the car locally instantiated, synchronized in each game turn with the host presence of the car that resides on the server. As noted briefly earlier, the unum is a supercharged version of the Summoner pattern; in the Summoner pattern synchronization is not performed. [Author's note, before this becomes cast in stone: I don't love "host presence". "Authoritative presence" is too long. How about "essence"? Or "true presence"?]