Though we could in principle use any of several mobile code
technologies, we will base our analysis on the properties of Java.
Java is a good choice for several reasons: it is widely used and
analyzed in real systems, and full source code is available to
study and modify.

The sandbox model is easy to understand, but it prevents many
kinds of useful programs from being written. All file system access is
forbidden, and network access is only allowed to the host where the
applet originated. While untrusted applets are successfully prevented
from stealing or destroying users' files or snooping around their
networks, it is also impossible to write a replacement for the users'
local word processor or other common tools which rely on more general
networking and file system access.

Traditional security in Java has focused on two separate, fixed
security policies. Local code, loaded from specific directories on
the same machine as the JVM, is completely trusted. Remote code,
loaded across a network connection from an arbitrary source, is
completely untrusted.

Since local code and remote code can co-exist in the same JVM, and can
in fact call each other, the system needs a way to determine if a
sensitive call, such as a network or file system access, is executing
``locally'' or ``remotely.'' Traditional JVMs have two inherent
properties used to make these checks:

\begin{enumerate}
\item Every class in the JVM that came from the network is loaded by a
ClassLoader and includes a reference to that ClassLoader. Classes that
came from the local file system are associated with a special system
ClassLoader. Thus, local classes can be distinguished from remote
classes by their ClassLoader.

\item Every frame on the call stack includes a reference to the class
running in that frame. Many language features, such as the default
exception handler, use these stack-frame annotations for debugging and
diagnostics.
\end{enumerate}
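The first property is directly visible through the standard reflection
API: every loaded class reports its defining ClassLoader via
Class.getClassLoader(), with classes from the bootstrap (system) loader
reporting null. A minimal sketch (class name LoaderDemo is ours):

```java
// Every class retains a reference to the ClassLoader that defined it,
// retrievable via Class.getClassLoader().  In the standard API,
// classes from the bootstrap ("system") loader report null.
public class LoaderDemo {
    public static boolean fromBootstrapLoader(Class<?> c) {
        return c.getClassLoader() == null;
    }

    public static void main(String[] args) {
        // java.lang.String is defined by the bootstrap loader ...
        System.out.println(fromBootstrapLoader(String.class));     // true
        // ... while this class is defined by the application loader.
        System.out.println(fromBootstrapLoader(LoaderDemo.class)); // false
    }
}
```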

Together, these two JVM implementation properties allow the security
system to search the call stack for remote code. If any class on the
call stack was loaded by a ClassLoader other than the special system
ClassLoader, a policy for untrusted remote code is applied.
Otherwise, a policy for trusted local code is used.
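The search described above can be modeled with a toy sketch. Here a
stack frame is represented simply by the ClassLoader of the class
running in it, with null standing for the special system loader
(matching the Class.getClassLoader() convention); the class and method
names are ours, not part of any JVM:

```java
import java.util.List;

// Toy model of stack inspection: a "frame" is just the ClassLoader of
// the class running in it (null = the special system loader).
public class StackInspection {
    // Returns true if every frame was loaded by the system loader,
    // i.e. the trusted local-code policy applies; returns false as
    // soon as any remote code is found on the stack.
    public static boolean trustedStack(List<ClassLoader> stack) {
        for (ClassLoader loader : stack) {
            if (loader != null) {
                return false; // remote code found: untrusted policy
            }
        }
        return true; // only local code: trusted policy
    }
}
```

A real JVM performs this walk over its internal stack representation
rather than over an explicit list, but the decision rule is the same.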

To enforce these policies, all potentially dangerous methods in the
system were designed to call a centralized SecurityManager class,
which checks whether the requested action is allowed (using the
mechanism described above) and throws an exception if remote code is
found on the call stack. The SecurityManager is meant to implement a
reference monitor [25,32] -- always invoked, tamperproof, and
easily verifiable for correctness.
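The resulting call pattern in sensitive library code looks roughly as
follows. The SecurityManager API shown is real (though deprecated for
removal since Java 17); the wrapper method itself is illustrative, not
an actual JDK method:

```java
// Sketch of the call pattern used by sensitive methods: consult the
// installed SecurityManager, which throws SecurityException if the
// requested action is denied.  (The SecurityManager API still exists
// but has been deprecated for removal since Java 17.)
public class GuardedFileAccess {
    // Illustrative wrapper around a file-open operation.
    // Returns true if the security check passed (or no manager is
    // installed); throws SecurityException otherwise.
    public static boolean openForRead(String path) {
        @SuppressWarnings("removal")
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            sm.checkRead(path); // throws SecurityException if denied
        }
        // ... proceed with the actual file open
        return true;
    }
}
```

By default no SecurityManager is installed, so the check is a no-op;
once one is installed, every such call site is mediated by it, which is
what makes the "always invoked" property of a reference monitor depend
on library authors remembering to insert the check.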

In practice, this design proved insufficient. First, when an
application written in Java (e.g., the HotJava Web browser) wishes to
run applets within itself, the low-level file system and networking
code cannot easily distinguish a direct call from an applet from a
system function safely running on behalf of an applet. Sun's
JDK 1.0 and JDK 1.1 included specific hacks to support this with
hard-coded ``ClassLoader depths'' (measuring the number of stack
frames between the low-level system code and the applet code).
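The fragility of such hard-coded depths can be illustrated with a toy
sketch (the class, constant, and method names are ours). Reusing the
convention that a frame is the ClassLoader of its class, the check
assumes the applet's frame sits at a fixed distance from the checking
code; inserting a single extra trusted frame shifts the applet and
defeats the check:

```java
import java.util.List;

// Toy illustration of JDK 1.0/1.1-style "ClassLoader depth" checks:
// system code assumes the applet's frame sits at a fixed, hard-coded
// distance from the checking code on the stack.  A frame is modeled
// as the ClassLoader of its class (null = system loader).
public class DepthCheck {
    // Hard-coded depth, in the spirit of the early JDK hacks: the
    // frame two positions down is assumed to be the applet.
    static final int EXPECTED_APPLET_DEPTH = 2;

    public static boolean appletAtExpectedDepth(List<ClassLoader> stack) {
        return stack.size() > EXPECTED_APPLET_DEPTH
                && stack.get(EXPECTED_APPLET_DEPTH) != null;
    }
}
```

Any refactoring of the trusted code that adds or removes a stack frame
silently changes which frame the depth refers to, which is why these
hacks proved so brittle in practice.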

In addition to a number of security-related bugs in the first
implementations [8], many developers complained that the sandbox
policy, applied uniformly to all applets, was too inflexible to
support many desirable ``real'' applications.
The systems presented here can all distinguish between different
``sources'' of programs and provide appropriate policies for each
of them.