Inherent Information Survivability
1. (Slide)
2. Information Survivability is a difficult, multidimensional problem. Historically,
research has focused on the construction of high assurance, secure systems that prevented
access by unauthorized users. No defense is perfect, however, and the reality is that some
attacks will succeed in penetrating such barriers, no matter how well they are constructed.
DARPA is pursuing a layered strategy for Information Survivability that views security
barriers as only the first line of defense. The second layer of defense is to efficiently
detect and identify those attacks that succeed in breaching the barriers, whether the attack
is a local penetration or a coordinated information warfare campaign. Finally, some
attacks will succeed at compromising system assets before they can be detected and
contained. Mission-critical systems must be able to tolerate attacks and continue
operation while under attack.
DARPA is developing and integrating technologies across all three of these layers.
3. DARPA’s efforts in this area are coordinated across programs in two offices.
Historically, the Information Technology Office (ITO) has been charged with developing
innovative solutions to address critical technology gaps in the layered defense model,
while the Information Systems Office (ISO) has been concerned with integrating these
technologies into a coherent information assurance architecture providing balanced
protection. Under the reorganization, much of the technology development will be
consolidated in ISO.
4. DARPA’s long term investment strategy in this area began with the ITO Information
Survivability Program, which is now nearing completion.
Information Survivability focused on strengthening barriers to penetration and advanced
local intrusion detection.
ISO launched its Information Assurance program two years later to integrate technologies
emerging from Information Survivability into a comprehensive architecture.
DARPA is now entering the second phase of its technology development strategy with a
program called Inherent Survivability. This program will focus on detecting and
assessing large scale attacks and developing technologies supporting intrusion tolerance.
5. Let me begin by summarizing the accomplishments of the Information Survivability
Program.
6. The goal of the first thrust of the Information Survivability Program was to develop
strong barriers to penetration appropriate for modern networked computing
environments. These environments are characterized by strong connectivity, distributed
applications, and heavy reliance on commercial off-the-shelf (COTS) software and
standards. In keeping with the layered defense model, barriers were developed at several
system levels.
7. As an example of our accomplishments in network security, we have developed
technologies for securing the Internet’s Domain Name System (DNS) infrastructure.
DNS is a hierarchically organized collection of servers that maintain mappings between
domain and host names and numerical Internet addresses.
Securing this important information against potential compromise entailed the
development of technology to efficiently authenticate transactions involving name-
address mappings between DNS components.
This technology is now undergoing standardization through the Internet Engineering
Task Force (IETF).
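The transaction-authentication idea can be sketched in a few lines of Python. This is a deliberately simplified illustration: the key, host names, and record format are hypothetical, and the actual IETF work (DNSSEC) uses public-key signatures over resource records rather than a shared-key code as shown here.

```python
import hashlib
import hmac

# Hypothetical shared key for illustration; real DNSSEC uses
# public-key signatures (DNSKEY/RRSIG records) instead.
KEY = b"zone-signing-key"

def sign_mapping(name: str, address: str) -> bytes:
    """Produce an authentication code over a name-address mapping."""
    record = f"{name}={address}".encode()
    return hmac.new(KEY, record, hashlib.sha256).digest()

def verify_mapping(name: str, address: str, tag: bytes) -> bool:
    """Check that the mapping was not altered between DNS components."""
    return hmac.compare_digest(sign_mapping(name, address), tag)

# A resolver can now detect a forged or tampered answer.
tag = sign_mapping("www.example.mil", "192.0.2.7")
```

A mapping that has been altered in transit fails verification, which is the property the DNS security work provides at scale.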
8. Middleware, a software layer interposed between applications and operating systems
that allows applications to be distributed across multiple hosts of potentially different
types, is becoming an increasingly important paradigm for distributed computing.
Current technology, however, does not support extending such distributed applications
across enclaves protected by firewalls and governed by different security policies.
This project focused on security enhancements to the popular Common Object Request
Broker Architecture (CORBA) middleware standard. A fine-grained security policy allows access to
be controlled down to the invocation of specific operations (methods) on objects. Access
is mediated by a firewall-like gateway, which also authenticates external requests.
Finally, requests are passed on to a secure Object Request Broker (ORB) inside the server
enclave.
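The gateway’s mediation step can be sketched as follows. The principal names, object name, and policy entries are invented for illustration; a real secure ORB would enforce this after cryptographic authentication of the external request.

```python
# Hypothetical method-level access policy, keyed by
# (principal, object, method) -- the fine granularity described above.
POLICY = {
    ("alice", "PayrollService", "read_summary"),
    ("payroll_admin", "PayrollService", "update_record"),
}

def gateway_invoke(principal: str, obj: str, method: str) -> str:
    """Firewall-like gateway: mediate each invocation before
    forwarding it to the secure ORB inside the enclave."""
    if (principal, obj, method) not in POLICY:
        raise PermissionError(f"{principal} may not call {obj}.{method}")
    return f"forwarded {obj}.{method} to secure ORB"
```

Note that access is controlled per method, not per object: the same principal may read a summary but not update a record.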
9. As an example of accomplishment in operating system security, consider the Fluke OS
project at the University of Utah. The central idea of Fluke is the Nested Process Model
in which a parent process has strict control over the resource consumption of its children.
The innovation in Fluke is that this is done very efficiently, allowing the imposition of
security functionality where the performance penalty might otherwise be considered
prohibitive.
In the example, a security manager process is used to encapsulate the behavior of an
untrusted application, while a trusted one is allowed to interact directly with the process
manager.
10. The Information Survivability Program also pioneered the concept of wrapper
technology for inserting security at the application level. This technology addresses
concerns raised by the incorporation of legacy or commercial off-the-shelf applications
into otherwise secure systems.
Wrappers are a software layer transparently interposed between the application and
operating system. Inputs to the application can be intercepted for the purpose of filtering,
transformation or monitoring and outputs can be similarly subject to a variety of actions
before release.
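The wrapper concept is easy to see in miniature. The sketch below, with invented filter and transform functions, shows the transparent interposition: the legacy application is called unmodified, while its inputs are vetted and its outputs transformed before release.

```python
def wrap(app, filter_input, transform_output):
    """Transparently interpose filtering and transformation
    around an untrusted or legacy application function."""
    def wrapped(request):
        if not filter_input(request):       # intercept and filter inputs
            raise ValueError("input rejected by wrapper")
        result = app(request)
        return transform_output(result)     # vet outputs before release
    return wrapped

# Hypothetical legacy application, wrapped without modification.
legacy = lambda s: s.upper()
safe = wrap(legacy, lambda s: len(s) < 100, lambda s: s.strip())
```

The application itself is unaware of the wrapper, which is what makes the technique attractive for COTS and legacy components.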
11. The second theme of the Information Survivability Program was to develop advanced
methods for detecting local intrusions. The goal was to achieve a high probability of
detection while minimizing the false alarm rate.
12. The state of the practice in intrusion detection relies on pattern matching against
known attack “signatures”. This method is incapable of detecting previously unknown
attacks and typically achieves only 20% detection at a very high 10% false alarm rate.
The Information Survivability program focused on augmenting signature-based schemes
with more general methods that attempt to capture “normal” system behavior and detect
deviations from that as possible indicators of penetration. Approaches include detecting
statistical anomalies in network traffic and profiling of an application’s system calls to
build a model of typical operation.
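The system-call profiling approach can be sketched as follows. This is a minimal version of the idea: record the short call sequences seen during normal operation, then score new traces by how many of their windows were never seen in training. The call names and window length are illustrative.

```python
def ngrams(trace, n=3):
    """Sliding windows of n consecutive system calls."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def build_profile(training_traces, n=3):
    """Collect every short call sequence observed during normal runs."""
    profile = set()
    for trace in training_traces:
        profile.update(ngrams(trace, n))
    return profile

def anomaly_score(trace, profile, n=3):
    """Fraction of windows never seen in training; a high score is a
    possible indicator of penetration, even for an unknown attack."""
    windows = ngrams(trace, n)
    if not windows:
        return 0.0
    unseen = sum(1 for w in windows if w not in profile)
    return unseen / len(windows)

profile = build_profile([["open", "read", "write", "close"]])
```

Because the profile captures normal behavior rather than attack signatures, a previously unknown attack that perturbs the call pattern can still be flagged.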
13. DARPA’s intrusion detection projects are currently undergoing evaluation through
activities at MIT Lincoln Laboratory and the Air Force Research Laboratory. The
performance of intrusion detection systems may be characterized by so-called receiver
operating characteristic (ROC) curves, which capture the fundamental trade-off between
the probability of detection and the false alarm, or false positive, rate.
The sample results shown in this slide indicate that DARPA intrusion detection
technologies performed very well on attacks that compromise root privileges, achieving
about 80% detection at acceptable false alarm rates.
14. The efforts also performed well on detecting “new” attacks versus “old” or previously
encountered ones in two of the categories: probe and root access.
The detection schemes performed less well on the problematic denial of service (DoS)
attacks and on attacks where a remote adversary does not acquire root privileges.
15. Building on the successes of the Information Survivability program, DARPA is
launching its follow-on program, called Inherent Survivability, this year.
Inherent Survivability is focused on global intrusion detection and intrusion tolerance.
16. Our goal in global intrusion detection is to identify large-scale attacks by correlating
information across local detectors and enable effective response at the appropriate level
of the system hierarchy.
17. Detecting large-scale attacks requires coordination of many local intrusion detectors.
A starting point for this coordination activity is the Common Intrusion Detection
Framework (CIDF) being defined by a working group established under the Information
Survivability program in 1997. CIDF supports basic interoperability of intrusion
detection systems through a common language for describing intrusion events and
protocols for exchanging reports between detectors. The CIDF work has already led to a
standards activity under the Internet Engineering Task Force (IETF).
18. Under Inherent Survivability, CIDF will be extended with richer functionality
allowing detectors and analysis engines to share information and local detectors to adapt
their behavior based on global context information, e.g., by filtering reports, tuning
parameters, or activating additional sensors.
It is vitally important to do as much filtering as possible at each level, so that only
events of higher-level significance are reported up the hierarchy.
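The report-filtering step can be sketched as below. The event types, severity scale, and threshold are invented for illustration; the real framework exchanges intrusion reports in CIDF’s common language rather than as Python dictionaries.

```python
# Hypothetical severity scale for intrusion event types.
SEVERITY = {"port_probe": 1, "failed_login": 2, "root_compromise": 5}

def filter_reports(events, threshold):
    """Local detector: pass upward only events significant enough
    for the next tier of the analysis hierarchy."""
    return [e for e in events if SEVERITY.get(e["type"], 0) >= threshold]

local_events = [
    {"type": "port_probe", "host": "h1"},
    {"type": "failed_login", "host": "h1"},
    {"type": "root_compromise", "host": "h2"},
]
escalated = filter_reports(local_events, threshold=3)
```

Global context could tune the threshold downward during a suspected campaign, which is the kind of adaptive behavior the extension envisions.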
19. The Inherent Survivability program will develop technologies to support intrusion
tolerance at both the system and network level.
The goal of the Intrusion Tolerant Systems component is to maximize the ability to
continue critical operations following partial compromise of the system. In contrast to
prevention technologies, which tend to emphasize confidentiality (access control,
authorization, authentication), intrusion tolerance technologies emphasize integrity and
availability.
In particular, this component is developing technologies to support data integrity and
mobile code integrity. It is also exploring innovative methods to increase the robustness
and resilience of software against compromise.
20. Data object integrity refers to a verifiable lack of malicious tampering with digitally
sampled information (such as image, audio, video) from the time of its generation
through various transformations that may be applied. We envision a scenario such as that
depicted here. The device that creates the object will contain a trusted component that
will build an integrity mark into the object at the time of creation, which can be used to
verify the origin and integrity of the object. Legitimate processes that perform
transformations or enhancements to the object will add an additional integrity mark that
may be composed of a list of transformations, a signature, and perhaps other data. This
collection of integrity marks should allow the integrity of the object to be verified and
any tampering to become evident anywhere along the way.
21. Mobile code is emerging as the dominant programming paradigm in networked
systems. From Java applets to agent-based systems to executable content and network
distribution of software, it is no longer possible to completely trust the integrity of code
being executed on our systems or even be aware of its existence. As agents are
dispatched to remote systems to do work on our behalf we are likewise unable to verify
the integrity of the results returned.
Inherent survivability is addressing both aspects of this problem. In the case of verifying
the integrity of code from an incompletely trusted source, a promising approach being
pursued is that of Proof-Carrying Code (PCC), originally developed at Carnegie Mellon
University.
In PCC, the code producer is required to submit a mathematical proof that the code
conforms to a safety or security policy dictated by the code consumer. The proof is
easily and quickly checked by the code consumer prior to accepting the code for
execution.
While the feasibility of this concept has been shown, much work remains to address
practical issues of scalability, efficiency, and expanding the scope of the policies that
may be imposed on the code (currently limited to memory safety).
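A toy model conveys the division of labor in PCC. Here the “proof” is simply per-access evidence that every memory access is in bounds, checked by the consumer before execution; real PCC ships a formal logic proof checked by a small trusted verifier, so everything below is a stand-in.

```python
# Toy illustration only: the "program" is a list of ("load", index)
# operations and the "proof" is the claimed list of accessed indices.
def check_proof(program, proof, buffer_size):
    """Consumer-side check: every access must be justified and in bounds."""
    accesses = [idx for op, idx in program if op == "load"]
    return accesses == proof and all(0 <= i < buffer_size for i in proof)

def run_if_safe(program, proof, buffer_size):
    """Execute untrusted code only after its safety proof checks out."""
    if not check_proof(program, proof, buffer_size):
        raise ValueError("proof rejected; code not executed")
    buf = list(range(buffer_size))
    return [buf[idx] for op, idx in program if op == "load"]
```

The key property survives even in the toy: checking the proof is cheap and local to the consumer, while the burden of constructing it falls on the code producer.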
22. Many software vulnerabilities are at the component interfaces. Current interface
technologies specify the number and syntax of parameters but fail to capture semantic
information such as the importance of each parameter, what precision is necessary, etc.
The goal of the Tolerant Software thrust is to develop innovative software interfacing
technologies that, by analogy with the concept of “tolerance” in mechanical part
assembly, allow components to interact productively even when there are minor
mismatches in expectations. Such technology will permit a software component to
tolerate attacks and make progress even when components on which it depends are
compromised or unavailable.
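A minimal sketch of interface tolerance follows. The component, parameter spec, and the convention that a `None` default marks a required parameter are all invented for illustration; the point is that small mismatches (a missing optional parameter, an extra one) degrade gracefully instead of failing.

```python
def tolerant_call(component, supplied: dict, spec: dict):
    """Invoke a component even when the caller's arguments only
    partially match its expectations: missing optional parameters
    fall back to defaults, extras are dropped rather than fatal."""
    args = {}
    for name, default in spec.items():
        if name in supplied:
            args[name] = supplied[name]
        elif default is not None:
            args[name] = default        # tolerate the omission
        else:
            raise TypeError(f"required parameter {name!r} missing")
    return component(**args)

# Hypothetical component and its interface spec (None = required).
def render(width, height, dpi):
    return (width, height, dpi)

spec = {"width": None, "height": None, "dpi": 72}
```

A caller that omits `dpi` and supplies an unexpected `color` argument still makes progress, which is the mechanical-tolerance analogy in action.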
23. The increasing reliance on commercial off-the-shelf (COTS) operating systems and
software in Defense systems poses a risk. Any vulnerability in such a widely deployed
system will be replicated across thousands of sites.
One approach to mitigating these shared vulnerabilities is to artificially introduce
controlled diversity into these systems. An attacker, unaware of the exact configuration
confronted, will be unlikely to succeed.
As an example, consider the buffer overflow attack, which exploits a common
vulnerability in many systems. The attack works by passing a privileged function a string
longer than its local buffer, overwriting the return address on the stack and installing the
attacker’s own code. When the function tries to return, it jumps to the attack code
instead.
A novel solution to this problem was developed at the Oregon Graduate Institute and
implemented in a tool called StackGuard. The compiler is modified so that each function
inserts a random string on the stack between any user-accessible buffers and the return
address. This random string is called a “canary” since it serves as an early warning of danger.
The buffer overflow is not prevented, but any attempt to overwrite the return address will
also destroy the canary. Prior to returning, the function verifies the integrity of the
canary, raising an alarm if it has been modified. The attack code is never executed.
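The canary mechanism can be simulated in Python. The frame layout, buffer size, and return address below are invented; in the real tool the canary check is compiled into each function’s entry and exit code rather than implemented as separate calls.

```python
import os

def make_frame(buffer_size: int):
    """Simulated stack frame: buffer | canary | return address."""
    canary = os.urandom(8)
    return {"buffer": bytearray(buffer_size), "canary": canary,
            "saved_canary": canary, "return_addr": 0x401000}

def write_buffer(frame, data: bytes):
    """An unchecked copy: an overflow spills past the buffer into the
    canary first, and only then reaches the return address."""
    buf = frame["buffer"]
    n = len(buf)
    buf[:] = data[:n]
    if len(data) > n:                       # overflow clobbers the canary
        frame["canary"] = bytes(data[n:n + 8]).ljust(8, b"\x00")
    if len(data) > n + 8:                   # ...then the return address
        frame["return_addr"] = int.from_bytes(data[n + 8:n + 16], "little")

def check_and_return(frame):
    """The compiled-in epilogue check: verify the canary before returning."""
    if frame["canary"] != frame["saved_canary"]:
        raise RuntimeError("stack smashing detected: canary modified")
    return frame["return_addr"]
```

Because the canary value is random, an attacker cannot overwrite the return address while leaving the canary intact, so the hijacked return is caught before the attack code runs.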
24. Intrusion Tolerance is also being addressed at the network level. The goal is to
maximize the residual capacity of the network infrastructure following attack. This
involves both ensuring the availability of the network to legitimate users and constraining
the resources available to the attacker.
25. Innovative approaches to mitigating denial-of-service attacks will be developed by
constraining the resource consumption of an attacker (or potential attacker).
Concepts to be explored include market-based resource allocation, which controls
consumption through an artificial economy, and progress-based protocols, which link
allowable resource consumption to the level of trust established, e.g., through a series of
increasingly strong authentication challenges.
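The progress-based idea can be sketched as a trust ladder: each authentication challenge a client passes unlocks a larger resource quota, so an unauthenticated attacker is confined to a trickle. The trust levels and quota numbers below are illustrative.

```python
# Hypothetical trust ladder: stronger proof of identity unlocks
# more resources (e.g., requests per minute).
QUOTA_BY_TRUST = {0: 1, 1: 10, 2: 100, 3: 1000}

class ProgressBasedAllocator:
    def __init__(self):
        self.trust = {}

    def pass_challenge(self, client: str):
        """Client answers a progressively stronger authentication
        challenge, raising its trust level."""
        self.trust[client] = min(self.trust.get(client, 0) + 1, 3)

    def quota(self, client: str) -> int:
        """Untrusted clients get a trickle; established clients get
        real capacity."""
        return QUOTA_BY_TRUST[self.trust.get(client, 0)]
```

A market-based variant would replace the fixed ladder with an artificial economy in which clients bid for capacity, but the effect is the same: attacker consumption is bounded.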
26. Active networking technology can dynamically build new network structures for
survivability. A difficult problem for system administrators is to quickly identify
the sources of attacks in order to stop the attack close to its source. Active networks can
deliver “traceback” capability, inverting the routing path selectively and delivering
preventative technology to the uncompromised sites nearest to the attack.
27. In summary, DARPA has developed focused technologies supporting the layered
defense strategy under the Information Survivability program and will continue to do so
under the follow-on Inherent Survivability program.
Coordination with ISO’s Information Assurance program ensures that technologies fill
system-level needs and fit into a coherent, balanced architecture.