We describe a cause-effect model of information system attacks and
defenses based on the notions that particular threats use particular attacks
to cause desired consequences and successful defenders use particular
defensive measures to defend successfully against those attacks and thus
limit the consequences. Human defenders and attackers also use a variety of
viewpoints to understand and analyze their attacks and defenses, and this
notion is brought to bear as well. We then describe some analytical methods
by which this model may be analyzed to derive useful results from available
and uncertain information. These results can then be applied to meeting the
needs of defenders (or, if turned on its head, attackers) to find effective,
minimal-cost defenses (or attacks) on
information systems. Next we consider the extension of this method to
networks and describe a system that implements some of these notions in
an experimental testbed called HEAT.

A Cause-Effect Model of Information System Attacks and Defenses:

There is a common notion of cause and effect that has
been debated in philosophical terms many times over the ages. Some
believe in fate - the happenings of the universe are predetermined.
Some believe in chance - God plays dice with the Universe. Regardless
of the underlying nature of things, as a fundamental assumption to
further work in this area, we take the position that the world works
through a system of causes, mechanisms, and effects. Thus we have the
picture of systems shown in figure 1.

In this depiction of our assumption, we assert that
Causes (also called threats) use Mechanisms (previously published under
the name Attacks, and also called Attack Mechanisms) to produce Effects
(also called consequences). Protective Mechanisms (also called
Defenses) are used to mitigate harm by acting to limit the causes,
mechanisms, or effects.

Our schemes of describing Causes, Mechanisms,
Effects, and Defenses, are based on the collection of sets of specific
causes, mechanisms, effects, and defenses into named groupings, but
underlying each of these sets there are specific actors, mechanisms, and
consequences that could be analyzed at a detailed level. Since the sets
we describe are not strictly classification schemes, there are
many-to-many mappings between specifics and sets in our
descriptions.
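These many-to-many mappings can be sketched as sets keyed by grouping name. The sketch below is illustrative only; the threat, attack, and defense names are invented examples rather than the model's actual groupings:

```python
# Hypothetical sketch of the model's named groupings: each threat maps to
# many attack mechanisms, each mechanism to many defenses, and a single
# item may appear in several groupings (many-to-many).
threats_to_attacks = {
    "insiders": {"errors and omissions", "Trojan horses"},
    "crackers": {"password guessing", "Trojan horses"},
}
attacks_to_defenses = {
    "password guessing":    {"augmented authentication", "audit analysis"},
    "Trojan horses":        {"known-attack scanning", "program change logs"},
    "errors and omissions": {"auditing", "procedures"},
}

def defenses_for_threat(threat):
    """Union of the defenses covering every attack available to a threat."""
    covering = set()
    for attack in threats_to_attacks.get(threat, set()):
        covering |= attacks_to_defenses.get(attack, set())
    return covering
```

Because the mappings are many-to-many, a single defense (here, the invented "known-attack scanning") can cover attacks used by several different threats.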

The viewpoints in our model represent the notion that
there may be many properties of elements of our model that can be
related, analyzed, and used by people and by automation to deal with the
issues of information protection.

We believe that the value in this particular model
lies in two areas: the reduction of complexity from a model based on all
of the specifics allows meaningful computations to be performed in
reasonable times, while the increased differentiation over simplified
models containing only a few items (e.g., corruption, denial of
services, information leakage) allows useful results to be derived.
This belief will, we hope, be justified by the analyses we are able to
perform.

We also feel it is important to note that this is not
the only model of this sort available today. Many other authors have
tried to form similar models and we have borrowed freely from them.
Some of the models we have considered in our efforts include John
Howard's model, the other models he cited and analyzed in his work, and
Donn Parker's Model, particularly in the area of consequences.

The specific mappings used in our analysis vary with
time, in large part because attackers and defenders are always learning, and
in smaller part because new mechanisms are being discovered over time. We
also find new ways of correlating issues over time, and this adds new
viewpoints. The current viewpoints include:

The size of the mapping today is on the order of 37x94/3
links from threats to attacks, 94x140/3 links from attacks to defenses, and
28x250/3 links between these items and the viewpoints. The actual count
today is about 7,000 links, which are stored as a numerical table for
analysis purposes. While it is impractical to provide that entire table
here, it can be obtained from the principal author in electronic form and
can be viewed as a linked database on the Internet at http://all.net/

Some Analytical Methods
Applicable to the Cause-Effect Model:

Given the above model of cause and effect, we have
generated a set of analytical techniques that enable us to do three
things:

Based on the medical diagnosis work that underlies
the Mycin system, we have created an indications capability
that predicts causes and correlated mechanisms based on a detected set
of mechanisms.

Based on the notion of a covering table, we
have analyzed methods for determining an optimal cover for defending
against a set of causes.

Based on syntax and semantics theory, we
have created a linguistic problem and solution generator that produces
sets of feasible attack and defense scenarios based on a user-specified
linguistic question.

Indications and Warnings Analysis:

For indications and warnings it is desirable that,
based on the information available, a prediction be produced that (1)
indicates the possible causes of the observed information and ranks
those causes in order of confidence that they could be a cause, (2)
based on those indications, predicts other observable mechanisms and
consequences associated with those causes, and (3) provides the means to
warn potential defenders and victims of consequences prior to those
consequences.

The analysis we perform is based on a set
of observed mechanisms. Given a set of mechanisms that are known or
thought to be in use, we reverse the cause-effect model to produce a
ranking of all causes that could be associated with each of the observed
effects. This ranking is based on the confidence level associated with
each observed phenomenon and the correlation between capabilities and
characteristics of the causes and observed mechanisms.
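This reversal can be sketched as a weighted score over a cause-to-mechanism correlation table. All causes, mechanisms, correlation values, and confidence levels below are invented for illustration and are not the model's actual tables:

```python
# Illustrative indications analysis: given observed mechanisms (each with a
# confidence level) and a correlation table between causes and mechanisms,
# produce a ranked list of candidate causes. All numbers are hypothetical.
correlation = {  # correlation[cause][mechanism] in [0, 1]
    "crackers": {"password guessing": 0.9, "Trojan horses": 0.6},
    "vandals":  {"password guessing": 0.3, "Trojan horses": 0.5},
    "nature":   {"password guessing": 0.0, "Trojan horses": 0.0},
}

def rank_causes(observed):
    """observed: {mechanism: confidence}. Returns causes sorted by score,
    highest (most likely within the model) first."""
    scores = {}
    for cause, mechs in correlation.items():
        scores[cause] = sum(conf * mechs.get(m, 0.0)
                            for m, conf in observed.items())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: password guessing observed with full confidence, Trojan horses
# suspected with half confidence.
ranking = rank_causes({"password guessing": 1.0, "Trojan horses": 0.5})
```

In this toy data, the invented "crackers" cause correlates best with the observed mechanisms, while "nature" has no correlation and ranks last.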

In the current implementation, a selection of
mechanisms is made from a menu of checkboxes using a Web browser. For
example, we might select the following set of observed mechanisms:

Club initiates, organized crime,
activists, and customers are less likely still, followed by deranged
people, vandals, infrastructure warriors, drug cartels and
police.

Finally, these mechanisms have no
correlation with the known capabilities and intent of hoodlums, nature,
paramilitary groups, terrorists, or whistle blowers.

Given the resulting metric associated with each
cause, we then produce an aggregate metric for each of the mechanisms
within the capability and characteristics of those causes. The result
is a metric associated with each mechanism indicating how closely it
matches the observed phenomena within the context of the model. For the
example above:

Given the resulting metric associated with each
mechanism, we then produce an aggregate metric for each of the
consequences that can result from each of those mechanisms. The result
is a metric associated with each consequence indicating how closely it
matches the observed phenomena within the context of the model.

These results have been used for several different purposes:

The metrics on mechanisms are used to determine
optimal selection of sensors to differentiate or detect while limiting
impact on normal operations.

The metrics associated with attack
mechanisms are used as weightings in the covering analyses described
above to optimize the design or selection of flexible defenses so as to
minimize the effectiveness of further attacks within a budget.

The analysis of consequences has been
extended (as described later) to a networked environment in the form of
analyzing the impact of an acquired level of access to one system on the
spread of an attack from system to system.

Covering Analysis:

The covering analysis we used is based on the
covering analysis used in optimization. The notion of covering analysis
is that we have a set of attacks and a set of defenses, where each
attack has a metric indicating the relative consequence of its use, each
defense has a metric indicating the relative cost of its use, and a
covering table indicates whether or to what extent each defense
mitigates the consequences of each attack. The analytical challenge is
to find an efficient mathematical method for determining an optimal
selection of defenses. Optimality can be determined against a set of
different measures to produce different results. We have examined two
particular measures: (1) minimizing the cost of achieving a particular
total level of consequence, and (2) minimizing the consequence given a
particular budget for defense.

The classic set covering problem has been widely
studied because of its use in airline crew scheduling. We can
formulate the defense-allocation problem as a generalization of the set
cover problem. Table entries represent the probability that a
particular defense can prevent a particular attack. If we assume attack
consequences are independent and defense probabilities are independent,
then we can calculate the expected consequence from the full set of
attacks.
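Under these independence assumptions, the expected consequence of a chosen defense set can be sketched as follows. The attack names, consequence values, and prevention probabilities are invented for illustration:

```python
from math import prod

# prevent[d][a] is the (hypothetical) probability that defense d stops
# attack a; consequence[a] is the (hypothetical) cost if attack a succeeds.
consequence = {"ftp-overflow": 10.0, "password-guess": 4.0}
prevent = {
    "filtering":   {"ftp-overflow": 0.8, "password-guess": 0.0},
    "strong-auth": {"ftp-overflow": 0.0, "password-guess": 0.9},
}

def expected_consequence(selected):
    """Expected total consequence when the defenses in `selected` are
    deployed, assuming defense success probabilities are independent."""
    total = 0.0
    for attack, cost in consequence.items():
        # An attack succeeds only if every selected defense fails to stop it.
        p_attack_succeeds = prod(1.0 - prevent[d].get(attack, 0.0)
                                 for d in selected)
        total += cost * p_attack_succeeds
    return total
```

With no defenses selected, the expected consequence is simply the sum of the attack consequences; each added defense multiplies in its residual failure probability.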

Although this computation is significantly more
complicated than classic set cover, we can modify much of the previous
work on set cover for this setting. In particular, since set cover is a
restrictive special case of the defense-allocation problem, hardness
results for set cover apply directly. Thus theoretically, we cannot
approximate the best defense set to within a factor of log d,
where d is the number of defenses. However, exhaustive methods
may be practical for the size of instances generated by small-to-medium
sized networks. We have generalized the greedy iterative methods which
are asymptotically optimal for set cover. It is also possible that they
remain asymptotically optimal in the more general setting. If we also
specify a target consequence for each attack rather than all attacks
combined, this problem can be modeled as an integer program similar to
the set-cover integer program, and therefore one can apply generalized
versions of the vast number of exact and heuristic methods
tuned for this application.
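The greedy heuristic can be sketched as repeatedly adding the affordable defense with the best reduction in expected consequence per unit cost. All names, costs, consequence values, and probabilities below are invented, and this is a sketch of the general technique rather than the system's actual implementation:

```python
from math import prod

consequence = {"ftp-overflow": 10.0, "password-guess": 4.0}
prevent = {  # hypothetical per-defense prevention probabilities
    "filtering":   {"ftp-overflow": 0.8},
    "strong-auth": {"password-guess": 0.9},
    "auditing":    {"ftp-overflow": 0.2, "password-guess": 0.2},
}
cost = {"filtering": 3.0, "strong-auth": 2.0, "auditing": 1.0}

def expected(selected):
    """Expected total consequence with the given defenses deployed."""
    return sum(c * prod(1.0 - prevent[d].get(a, 0.0) for d in selected)
               for a, c in consequence.items())

def greedy(budget):
    """Greedily pick defenses maximizing consequence reduction per unit
    cost until no affordable defense improves the expected consequence."""
    chosen, spent = [], 0.0
    while True:
        best, best_gain = None, 0.0
        for d in prevent:
            if d in chosen or spent + cost[d] > budget:
                continue
            gain = (expected(chosen) - expected(chosen + [d])) / cost[d]
            if gain > best_gain:
                best, best_gain = d, gain
        if best is None:
            return chosen
        chosen.append(best)
        spent += cost[best]
```

Note that the greedy choice is locally optimal per step, which is exactly why it inherits the set-cover approximation behavior discussed above rather than guaranteeing a global optimum.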

A covering analysis based on the
greedy algorithm is currently implemented. Based on the above example,
and with the additional constraints that we wish only to detect
corruption and do so with only commercially available mechanisms, we get
the following list of potential detection defense mechanisms:

known-attack scanning

program change logs

time, location, function, and other similar access limitations

filtering devices

searches and inspections

redundancy

deceptions

procedures

auditing

audit analysis

augmented authentication devices (time or use variant)

security marking and/or labeling

classifying information as to sensitivity

document and information control procedures

The covering analysis indicates that these methods
have the potential of detecting all of the identified mechanisms,
indicates that the defenses with the best coverage are (1) time,
location, function, and other similar access limitations, (2)
redundancy, and (3) filtering devices and that in combination, these
cover almost all of the identified attack mechanisms. The detailed
results of the covering table indicate which classes of defense
mechanisms cover which classes of attack mechanisms and use coloring to
indicate uncovered attack mechanisms.

Linguistic Analysis:

The third extension is to describe techniques that
can be used to analyze systems based on this model. In this extension,
we treat the set of all cause, mechanism, effect, defense, and viewpoint
sequences as the set of legal statements within a language. We then
take a user-specified sentence in the form of menu selections from the
set of possible causes, mechanisms, effects, defensive capabilities, and
viewpoints, and produce all valid sentences within the language that
meet those specifications. A simple example is the input phrase
(selected from menus):

Known-attack scanning can be used for detection or prevention or reaction.

Redundancy can be used for detection or prevention or reaction.

Searches and inspections can be used for detection or reaction.
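A minimal sketch of such a generator treats each valid (defense, capability) pairing as a legal sentence and emits every sentence matching the user's menu selections. The pairing table below simply restates the example sentences above:

```python
from itertools import product

# Valid capability pairings for each defense, taken from the example above.
capabilities = {
    "Known-attack scanning":    {"detection", "prevention", "reaction"},
    "Redundancy":               {"detection", "prevention", "reaction"},
    "Searches and inspections": {"detection", "reaction"},
}

def sentences(selected_defenses, selected_capabilities):
    """Produce all legal sentences matching the menu selections."""
    out = []
    for d, c in product(selected_defenses, selected_capabilities):
        if c in capabilities.get(d, set()):
            out.append(f"{d} can be used for {c}.")
    return out
```

Invalid combinations (here, "Searches and inspections" for prevention) are simply never generated, which is how the language restricts output to feasible scenarios.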

The Extension of These Results to Networked Systems:

The fourth extension is to use these techniques to
analyze networked systems. In this analysis, we generate an attack
graph by identifying the vulnerabilities of each of a set of systems
within a network and characterizing the way in which those systems can
communicate. The attack graph is, in essence, the set of all possible
sequences of mechanisms that an attacker can use to achieve a particular
goal within a network. We analyze attack graphs and defense postures
(sets of defensive mechanisms that can be placed in the network) to (1)
increase the minimum cost of attack within a fixed budget for defense
and (2) minimize the cost of defense for a given attack budget. We then
do an analysis of this graph to find, for example, a minimum-cost cut of
the attack graph, where cuts are used to represent the effect of
protective mechanisms.

The attack graph method is flexible, allowing
analysis of attacks from both outside and inside the network. It can
analyze risks to a specific network asset, or examine the universe of
possible consequences following a successful attack. The analysis
system requires a database of common attacks broken into atomic steps,
specific network configuration and topology information, and an attacker
profile. The attack information is then "matched" with the
network configuration information and attacker profile to create a
"superset" attack graph. Nodes in this graph identify a stage
of attack (e.g., the class of machines the attacker has accessed and the
privilege levels compromised). Arcs in this graph represent the paths
through which mechanisms can be used to change the stage of attack. By
assigning costs representing level-of-effort for the attacker (or
alternatively probabilities of success) to the arcs, graph analysis
algorithms such as shortest-path algorithms can be used to identify the
attack paths with the lowest cost (or highest probability) of success.
Defense postures (i.e., sets of defensive mechanisms that can be placed
in the network) can also be analyzed for their impact on cost (or
probability) to increase the minimum cost (or maximum probability) of
attack within a fixed budget for defense or minimize the cost (or
probability) of successful defense for a given attack budget.
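The shortest-path computation over the attack graph can be sketched with Dijkstra's algorithm. The stages and effort values below are hypothetical, not drawn from any real network:

```python
import heapq

# Hypothetical attack graph: nodes are stages of attack, arc weights are
# attacker level-of-effort to move between stages.
graph = {
    "outside":       {"dmz-user": 2.0, "dial-in": 5.0},
    "dmz-user":      {"dmz-root": 3.0},
    "dial-in":       {"internal-user": 1.0},
    "dmz-root":      {"internal-user": 1.0},
    "internal-user": {"goal": 4.0},
    "goal":          {},
}

def cheapest_attack(start, goal):
    """Minimum total attacker effort from start to goal (inf if unreachable)."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")
```

A defense posture that removes or reweights arcs can then be evaluated by how much it raises this minimum attack cost, which is the optimization described above.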

Once the attack graph has been generated, we can also
apply analysis methods to determine high-risk attack paths. The graph
may also be used to run simulations of attacks and defense both in a
batch mode for optimizing a set of fixed defenses or in a real-time mode
for predicting the impacts of future attacks given a current situation.
This then forms the basis for simulation components of a proposed
capability for model-based situation anticipation and constraint for
flexible defenses as well as a potential method for use in indications
and warnings against information systems and networks.

The CID System:

An automated system (named CID) demonstrates these
analyses in useful application. CID is integrated with a set of
specially configured hardware, software, and rooms and the HEAT computer
network to provide a cyber-warfare experimentation, analysis, training,
and gaming environment called the Cyberwarfare Center. In order to
understand the role of CID in this environment, we will begin by
describing the environment and its uses.

HEAT is a computer network located at Sandia National
Laboratories in California. It was originally designed for experimentation
with heterogeneous MIMD parallel processing. HEAT consists of more than 60
networked computers (10 each of Suns, SGIs, IBMs, HPs, DECs, and a larger
number of PCs) running a wide range of operating systems and versions,
intended to provide a rich environment for testing computer and network
security systems and methods. To our knowledge, this is the largest
computer security testbed in service today, and it is used on a daily basis
to test attacks and defenses.

In order to manage attacks and defenses in HEAT at an
affordable cost, automation was needed. Coincidentally, an earlier version
of CID was, at that time, being rewritten to operate on a combination of an
Oracle database server and a Netscape web server. This left several
computers previously used for CID development available for use in
controlling HEAT. This control is complex enough, and watching what is
happening on more than 60 computers simultaneously requires so much display
space, that a 'situation room' was constructed for the purpose.
The capabilities of this room included
several projection displays, a small network of computers, and a set of
tables and chairs. Given the constraints of space and other facilities, the
room was designated for multiple uses, including instruction, strategic
gaming, and experimentation with HEAT. Thus the Cyberwarfare Center
emerged.

Because experiments on attack and defense against
live computer networks require many samples of attack and defense
systems, CID, which already had a substantial collection of tools and a
database capability, was called into service for coordinating the
experimental capabilities needed for work on HEAT, the demonstration
capabilities and course materials used in training, the scenarios used
for gaming, and the analysis used for experimenting with automated
flexible defenses. Today, CID is used for all of these purposes and
also forms the core of a repository for research and analysis in
information security at Sandia's California site.

CID Management of HEAT

In order to manage HEAT, CID provides a Web-based
interface that permits a menu selection of the actions to be performed
along with check boxes for the HEAT machines the operation is to be
performed on. For example, in order to launch automated attacks against
a set of HEAT machines, the user might select "FTP attacks"
against the checked machines. Those attacks would then be run with
results reported to the browser. The table below depicts this interface
with bold italics used in place of check boxes.

Use only FTP Attacks on:

SGI-1  IBM-1  HP-1  DEC-1  SUN-1
SGI-2  IBM-2  HP-2  DEC-2  SUN-2
SGI-3  IBM-3  HP-3  DEC-3  SUN-3
SGI-4  IBM-4  HP-4  DEC-4  SUN-4
SGI-5  IBM-5  HP-5  DEC-5  SUN-5
SGI-6  IBM-6  HP-6  DEC-6  SUN-6
SGI-7  IBM-7  HP-7  DEC-7  SUN-7
SGI-8  IBM-8  HP-8  DEC-8  SUN-8
SGI-9  IBM-9  HP-9  DEC-9  SUN-9
SGI-10 IBM-10 HP-10 DEC-10 SUN-10

Similar interfaces either exist or are being
completed to allow control of processes, control of defenses in the form
of wrapper programs, audit extraction and analysis, and attacks that
simulate modeled threat profiles. In addition to this manual sort of
management, tools are being contemplated for automating the placement
and control of defenses based on a technique called model-based
situation anticipation and constraint.

This management interface can run on multiple
machines simultaneously, thus allowing an attacker and a defender to
participate in a simulated cyber-battle with the attacker granted only
attack capabilities and the defender granted only defensive capabilities
based on access controls available in CID. For demonstrations, the
attack and defense can run on different displays in the cyberwarfare
center's training and control facility with the center screen reserved
for projecting briefing material related to the demonstrated attacks and
defenses. Through the use of collaborative tools, attackers and
defenders in separate gaming suites can be observed from the control
facility with each on a different screen. Game control functions,
including sending briefings, status reports, orders, and other
information to the participants, are handled via the central display;
observers or referees can watch the process from the central site, and
trainees can watch the battle either as it happens or in replay. This
permits attackers and defenders to review their actions in much the same
way as other sorts of exercises provide feedback.

Future experiments are anticipated using automated
and human defenses against automated and human attackers both for
training humans and for testing automated technologies.

Other Features of CID

In addition to the elements
described above, CID provides several important features. In each of
the examples above, as well as throughout CID, details of all technical
terms are available at the press of a mouse button. This
drill-down capability includes citations to relevant literature which
may also have embedded citations. We are in the process of scanning in
all of the references related to material in CID for instantaneous
access, thus allowing rapid literature search and analysis. CID also
has a search engine to allow general searches of content and drill-down
material for the purpose of finding examples and other related
information. Under attacks and defenses, drill-down capabilities lead
to specific examples of techniques, in some cases including source code
for attacks and defenses that can be applied directly against HEAT or
elsewhere. Detailed drill-down is also provided to allow intel-based
detailing of threats and linkage of case examples to all elements of the
database. In coming implementations, additional capabilities will be
provided for linking alternative classification schemes into CID and
allowing the same analysis of those schemes as is provided for CID's
internal schemes. In such an online collection, access controls are
also required to allow need-to-know access to specific details. CID
currently has limited access control on all records and is being
augmented to allow fine-grained access control to all database elements
including access controlled search and analysis. This permits a user
with limited access to do all of the analytical functions based only on
the knowledge available, while users with more complete access may find
better solutions.

Summary, Conclusions, and Future Work

The use of a cause-effect model for analyzing attacks
and defenses in computer networks appears to have a bright future.
Initial results seem to indicate that computational complexity can be
effectively traded off against model resolution, and that this allows
complex networks to be analyzed for interesting security properties.
The creation of tools in combination with experimental testbeds
has allowed much of this work to be validated through experiments, but
clearly this work is still in its infancy. Future work is currently
being proposed to move forward from the initial results shown here
toward the design and implementation of increasingly automated and
integrated tools for analyzing and managing larger networks more
effectively.

The classes described herein have synergisms, so standard
statistical techniques may not be effective in analyzing them. For example,
if two attacks are each 90% effective, together they may become 99%
effective, while two defenses that are each 90% effective may not be 99%
effective combined and may even hinder each others' performance.

Complexity: Synergistic effects are not yet fully understood in this
context; this makes analysis of attack and defense quite complex and may
make optimization impossible until synergies are better understood.

The classes here are
described informally and - with a few notable exceptions - have not been
thoroughly analyzed or even defined in a mathematical sense.

Complexity: Except in those few cases where mathematics
has been developed, it is hard to even characterize the issues underlying
these classes, much less attempt to provide comprehensive understandings.

Each class described here may or may not be applicable in or to
any particular situation. While threat profiles and historical information
may lead us to believe that certain items are more or less important or
interesting, there is no scientific basis today for believing that any of
these classes should or should not be considered in any given circumstance.
This determination is made today entirely as a judgement call by
decision makers.

Complexity: Despite this general property, in most
cases, analysis is possible and produces reasonably reliable results over a
reasonably short time frame.

The classes given here are almost certainly incomplete in
covering the possible or even realized attacks and defenses. This is
entirely due to the author's and/or reviewers' lack of comprehensive
understanding at this time.

Complexity: We can only form a complete system
by completely characterizing these issues in a mathematical form - something
nobody has even approached doing to date.

Actors that may Cause Information System Failure (Threats)

Employees, board members, and other internal team members who
have legitimate access to information and/or information technology.

Complexity: Insiders typically have special knowledge of internal controls
that are unavailable to outsiders, and they have some amount of access. In
some cases, they perform only authorized actions - as far as the information
systems have been told. They are typically trusted, and those in control often
trust them to the point where placing internal controls against their attacks
is considered offensive.

Private individuals or corporate entities that investigate on a
for-fee basis.

Complexity: Investigators are willing to do a substantial amount of targeted
work toward accomplishing their goals; in some cases they may be willing to
violate the law, they often have contacts in government and elsewhere that
provide information not commonly available, and they commonly use bribes of
one form or another to advance their ends.

People who work for newspapers, news magazines, television,
radio, or other media elements.

Complexity: Reporters often gain access that others do not have, often use
misleading cover stories or false pretenses, commonly try to become friendly
with insiders in order to get information, and have extraordinary power to
publicly punish what they perceive to be or can construe as misdeeds.

People who work under their own control to provide contract
services to others.

Complexity: Consultants often have insider access but are not controlled as
are insiders. Technical consultants who use client information technology
present a technical threat, while management consultants, who often have
access to more of the sensitive information in a company, present a
human threat.

Complexity: Vendors are often in competition with each other over sales and
with you over pricing and terms. They tend to be in long-term relationships
and often work closely with your people. Their economic motives are often
not aligned with yours and in some cases, they take advantage of information
in order to gain economic advantage in negotiations.

Complexity: Customers are often in competition with you over pricing and
terms. Their economic motives are often not aligned with yours and in some
cases, they take advantage of information in order to gain economic advantage
in negotiations. In some cases, customers have worked their way into
companies, extracted information, and taken over their suppliers' businesses
by taking advantage of the knowledge gained through their interactions.

Other individuals or companies in the same or similar businesses
and who stand to gain from your loss or who can gain economic advantage by
taking advantage of you.

Complexity: Competitors are commonly perceived as an economic threat, but in
large businesses, they are often collaborators on some projects and
competitors on others. As a result, information technology is often used to
provide access for some purposes. It can be quite tempting to exploit this
access and these relationships in competitive areas.

People who enjoy using computers and exploring the information
infrastructure and systems connected to it.

Complexity: While not generally malicious, these people tend to gather and
exploit tools that open holes to other attackers. They also sometimes make
mistakes or become afraid and feel they have to cover their tracks, thus
causing incidental harm.

People who maliciously break into information systems and
intentionally cause harm in doing so.

Complexity: These people have tools similar to those of hackers, but they
use these tools for malicious purposes and can sometimes cause a great deal
of harm. They are often bold, and often exploit indirect links to make it
hard to trace them back to their source.

Groups who roam the information infrastructure breaking into systems
and doing harm for fun and profit.

Complexity: These groups are generally willing to exploit commonly known
attacks as well as an occasional novel attack. Perception management and
dumpster diving are some of their favorite tools. They are often emboldened by
group dynamics.

People hired to demonstrate vulnerabilities in systems by
exploiting those vulnerabilities.

Complexity: These people are usually honest, but sometimes they are not. In
addition, they often fail to properly repair the systems they try to break
into, thus leaving residual vulnerabilities. Their skills vary widely, from
rank amateurs using off-the-shelf software - to true experts with a high
degree of sophistication. It is often hard to tell which is which unless you
are an expert.

People who typically have access to physical locations in order
to do routine maintenance tasks.

Complexity: Maintenance people commonly introduce viruses by accident. They
often have far more physical access than even highly trusted employees, they
are often allowed in sensitive areas alone and at off-hours, they are usually
poorly paid and assumed to have little knowledge, and they are often trusted
with items of high value.

Complexity: Professional thieves typically use the best tools they can find,
practice ahead of time for major thefts, and use highly coordinated efforts
to achieve their goals. They have historically tended toward physical means,
but this may be changing.

People who believe in a cause to the point where they take
action in order to forward their ends.

Complexity: These people can be extremely zealous - even when they are
misdirected. They often consider one viewpoint to the exclusion of all
others, try to maximize harm to their victim without regard to competitive
issues or personal gains, and typically use physical means - sometimes with
the additional element of publicity as part of their motive.

Complexity: These groups typically have a lot of money and are willing to
spend it in order to get what they want. They typically want to launder
money, eliminate competition, retain control over their dealer networks, and
keep law enforcement away. They use violence and physical coercion easily.

People who professionally gather information and commit
sabotage for governments.

Complexity: These people are highly trained, highly funded, backed by
substantial scientific capabilities, directed toward specific goals, and
skillful at avoiding detection. They can be very dangerous to life and
property.

Complexity: These people often have powers of search and seizure, are
usually poorly paid, wield guns, have powers of arrest, and in much of the
world are easily corrupted. They tend to use physical means.

Complexity: These groups are highly funded, often made up largely of
professionals, they commonly have indirect powers of search and seizure,
sometimes wield guns, have indirect powers of arrest, and in much of the
world are easily corrupted. They often use highly sophisticated means.

Complexity: These groups typically have access to accurate weapons and high
explosives, they are oriented toward causing serious physical harm, often
have the goal of causing permanent harm, do not hesitate to kill people, and
act at the behest of governments with their full and open support.

Companies, groups, and governments that compete on a large
scale with your companies, groups, and governments.

Complexity: While economic rivals are usually merely competitive, sometimes
they become rather extreme in their desire for technical information and
attack in order to gain technical expertise. They tend to be well funded,
have a lot of expertise, and typically operate from locations which provide
legal cover for their actions.

Complexity: When countries decide to attack other countries in the
information arena, they often use stealth to try to provide plausible
deniability; however, this is not always the case, and they often fail to
achieve true anonymity. Responses may lead to escalation - and in some
cases - escalation can lead to full-scale war.

Complexity: Militaries tend to blow things up; however, in the more advanced
military organizations, information is exploited to maximize their advantage
and neutralize opponent capabilities. Physical destruction is often avoided
in order to preserve infrastructure used after the conflict has ended. They
tend to have and use exotic as well as everyday capabilities.

People who specialize in attacking information systems as
part of government-sponsored military operations.

Complexity: Information warriors may use any or all of the known techniques
as well as techniques developed especially for their use and kept secret in
order to attain military advantage. They tend not to kill people
unnecessarily.

Mechanisms by Which Information Systems are Caused to Fail (Attacks)

Erroneous entries or missed entries by designers, implementers,
maintainers, administrators, and/or users create vulnerabilities exploited
by attackers. Examples include forgetting to eliminate default accounts and
passwords when installing a system, incorrectly setting protections on
network services, and a wide range of other minor mistakes that can lead to
disaster.
Complexity: There appears to be an unlimited (finite but
unbounded) number of possible errors and omissions in general purpose
systems. Special-purpose systems may be more constrained.

Changes on the surface of the sun cause excessive amounts of
radiation to be delivered, typically resulting in noise bursts on radio
communications, disrupted communications, and other changed physical
conditions.

System maintenance causes periods of time when systems operate
differently than normal and may result in temporary or permanent
inappropriate or unsafe configurations. Maintenance can also be exploited by
attackers to create forgeries of sites being maintained, to exploit temporary
openings in systems created by the maintenance process, or other similar
purposes. Maintenance can accidentally result in the introduction of
viruses, by leaving improper settings, and by other similar accidental
events.
Complexity: Statistical
techniques and historical data appear to be quite sufficient to analyze
system maintenance.

Inadequate maintenance results in uncovered failures over
extended periods of time, possibly inducing a period of time when systems
operate differently than normal and may result in temporary or permanent
inappropriate or unsafe configurations.
Complexity: Statistical
techniques and historical data appear to be quite sufficient to analyze
maintenance adequacy.

Complexity: Detecting Trojan horses is almost certainly an undecidable
problem (although apparently nobody has proven this, it seems clear), but too
little mathematical analysis has been done on this subject to provide
further clarification.

Impersonations or false identities are used to bypass controls,
manage perception, or create conditions amenable to attack. Examples
include spies, impersonators, network personae, fictional callers, and many
other false and misleading identity-based methods.

Complexity: This appears
to be a very complex social, political, and analytical issue that is nowhere
near being solved.

Complexity: Setting protections properly is not a trivial
matter, but there are linear time algorithms for automating settings once
there is a decision procedure in place to determine what values to set
protection to. No substantial mathematical analysis has been published in
this area and no results have been published for the complexity of building
a decision procedure, however it is known that, under some conditions, it is
impossible to have settings that both provide all appropriate access and
deny all inappropriate access.
[Cohen91] It is known to be undecidable
for a general purpose subject/object system whether a given subject will
eventually gain any particular right over any particular object.
[Harrison76]
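The linear-time application pass described above can be sketched in a few lines. This is only an illustration: `apply_policy` and the toy `decide` procedure are hypothetical names, and the hard problem the text identifies - constructing the decision procedure itself - is simply assumed away here.

```python
import os
import stat
import tempfile

def apply_policy(paths, decide):
    """Linear-time pass: one decision-procedure call and one chmod
    per object. The loop is trivial; decide() is the hard part."""
    for p in paths:
        os.chmod(p, decide(p))

# Toy decision procedure: secrets are owner-only, all else group-readable.
def decide(path):
    return 0o600 if path.endswith(".secret") else 0o640

with tempfile.TemporaryDirectory() as d:
    files = [os.path.join(d, n) for n in ("key.secret", "notes.txt")]
    for f in files:
        open(f, "w").close()
    apply_policy(files, decide)
    modes = [stat.S_IMODE(os.stat(f).st_mode) for f in files]
```

The pass scales with the number of objects, consistent with the claim that automating the settings is easy once the (possibly impossible) decision about what to set has been made.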

Resources are manipulated so as to make functions requiring
those resources operate differently than normal. Examples include e-mail
overflow used to disrupt system operation,
[Cohen93] file handle
consumption used to prevent audits from operating,
[Cohen91] and
overloading unobservable network paths to force communications to use
observable paths.

Complexity: Most of the issues with resource availability result from the
high cost of making worst-case resources available. As a result, a tradeoff
is made in the design of systems that assures that under some (hopefully
unlikely) conditions resources will be exhausted while providing a suitably
high likelihood of availability under almost all realistic situations. The
general complexity involved with most resource allocation problems in which
limited resources are available is at least NP-complete.

Causing people to believe things that
forward the goal. Examples include tricking a person into giving you their
password or changing their password to a particular value for a period of
time, talking your way into a facility, and causing people to believe in
religious doctrine in order to get them to behave as desired.

Complexity: This has been a security issue since the beginning of time and
appears to be a very complex human, social, political, and legal issue. No
substantial progress has been made to date in resolving this issue.

Creating false or misleading information in order to fool a
person or system into granting access or information not normally
available. Examples include operator spoofing to trick the operator into
making an error or giving away a password, location spoofing to trick a
person or system into believing a false location, login spoofing which
creates a fictitious login screen to get users to provide identification and
authentication information, email spoofing which forges email to generate
desired results, and time spoofing which creates false impressions of
relative or absolute time in order to gain advantage.

Complexity: Although
no deep mathematical analysis of this area has been published to date, it
appears that this issue does not involve any difficult mathematical
limitations. Limited results in providing secure channels have indicated
that such a process is not complex but that it may depend on cryptographic
techniques in some cases, which lead to substantial mathematical issues.

Interfering with infrastructure so as to disrupt services
and/or redirect activities. Examples include creating an accident on a
particular road at a particular place and time in order to cause a shipment
to be rerouted through a checkpoint where components are changed, taking
down electrical power in order to deny information services, modifying a
domain name server on the Internet in order to alter the path through which
information flows from point to point, and cutting a phone line in order to
sever communications.

Complexity: Although no mathematical analysis has
been published on this issue to date, it appears that analyzing
infrastructure interference is quite complex and involves analysis of all of
the infrastructure dependencies if the attack is to be directed and
controlled. Similarly, the detection and countering of such an attack
appears to be quite complex. It would appear that this is at least as
complex as solving multiple large min-cut problems. Some initial analysis
of U.S. information infrastructure dependencies has been done and has led to
a report of about 1,000 pages which only begins to touch the surface of the
issue.
[SAIC-IW95]

Examining the infrastructure in order to gain information.
Examples include watching air ticketing information in order to see when
particular people go to particular places and using this as an intelligence
indicator, tapping a PBX system in order to record key telephone
conversations, and watching for passwords on the Internet in order to gain
identification and authentication information to multiple computers.

Complexity: Except in cases where cryptography, spread spectrum, or other
similar technology is used to defend against such an attack, it appears that
infrastructure observation is simple to accomplish and expensive to detect.
No mathematical analysis has been published to date.

Insertion of information in transit so as to forge desired
communications. Examples include adding transactions to a transaction
sequence, insertion of routing information packets so as to reroute
information flow, and insertion of shipping address information to replace
an otherwise defaulted value.

Complexity: Although there appears to be a
widespread belief that insertion in transit is very difficult, in most cases
it is technically straightforward. Complexity only arises when defensive
measures are put in place to detect or prevent this sort of attack.

Complexity: Except in cases where cryptography, spread
spectrum, or other similar technology is used to defend against such an
attack, it appears that observation in transit over physically insecure
communications media is simple to accomplish and expensive to detect. In
cases where the media is secured (e.g., interprocess communication within a
single processor under a secure operating system) some method of getting
around any system-level protection is also required.

Modification of information in transit so as to modify
communications as desired. Examples include removing end-of-session requests
and providing suitable replies, then taking over the unterminated
communications link, modification of an amount in an electronic funds
transfer request, and rewriting Web pages so as to reroute subsequent
traffic through the attacker's site.

Complexity: Modification in transit is
roughly equivalent in complexity to the combination of observation in
transit and insertion in transit, however, because of the real-time
requirements for some sorts of modification in transit, the difficulty of
successful attack may be significantly increased.

Creating or exploiting positive feedback loops or underdamped
oscillatory behaviors so as to overload a system. Examples include electrical
or acoustic wave enhancement, the creation of packets in the Internet which
form infinite communications loops, and protocol errors causing cascade
failures in telephone systems.

Complexity: In some underdamped systems,
sympathetic vibration is easily induced. It sometimes even happens
accidentally. In over-damped systems, sympathetic vibration requires
additional energy. In logical systems - such as protocol driven networks -
the complexity of finding an oscillatory behavior is often very low. A
simple search of the Internet protocols leads to several such cases. More
generally, finding such cases may involve N-fold combinations of protocol
elements, which is exponential in time and linear in space. Proving that
protocols are free of such behaviors is known to be at least NP-complete.
[Bochmann77][Danthine82][Hailpern83][Merlin79][Palmer86][Sabnani85][Sarikaya82][Sunshine79]
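At its simplest, hunting for an oscillatory behavior can be approximated as cycle detection in a protocol's message-flow graph. The sketch below is illustrative only - the `find_cycle` helper and the toy protocol are invented, not drawn from any real protocol suite - and, as the text notes, proving a real protocol free of such loops is far harder than finding one.

```python
def find_cycle(transitions):
    """Depth-first search for a cycle in a directed graph given as
    {state: [next_states]}. A cycle in the message-flow graph marks
    a potential infinite communications loop."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in transitions}

    def dfs(s, path):
        color[s] = GRAY
        for t in transitions.get(s, []):
            if color.get(t, WHITE) == GRAY:
                return path + [t]          # back edge: cycle found
            if color.get(t, WHITE) == WHITE:
                found = dfs(t, path + [t])
                if found:
                    return found
        color[s] = BLACK
        return None

    for s in list(transitions):
        if color[s] == WHITE:
            cycle = dfs(s, [s])
            if cycle:
                return cycle
    return None

# Toy protocol: an "echo" reply that itself triggers another echo.
proto = {"send": ["ack"], "ack": ["echo"], "echo": ["ack"]}
```

Detection here is linear in the size of the graph; the exponential cost cited in the text comes from enumerating the combinations of protocol elements that generate the graph in the first place.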

Design flaws in tightly coupled systems that cause error
recovery procedures to induce further errors under select conditions.
Examples include the electrical cascade failures in the U.S. power grid,
[WSCC96] telephone system cascade failures causing widespread long
distance service outages,
[Pekarske90] and inter-system cascades such as
power failures bringing down telephone switches required to bring back up
power stations.

Complexity: Only cursory examination of select cascade
failures has been completed, but initial indications are that the complexity
of creating a cascade failure varies with the situation. In systems
operating at or near capacity, cascade failures are easily induced and must
be actively prevented or they occur accidentally.
[WSCC96]
As systems move further away from being tightly coupled and
near capacity, cascade failures become far more difficult to accomplish. No
general mathematical results have been published to date, but it appears
that analyzing cascade failures is at least as complex as fully analyzing
the networks in which the cascades are to be created, and this is known for
many different sorts of networks.

Promises or threats that cause trusted parties to violate their
trust. Examples include bribing a guard to gain entry into a building,
kidnaping a key employee's family members to gain access to a computer
system, and using sexually explicit photographs to convince a trusted
employee to provide insider information.

Complexity: This issue is as
complex as the general problem of insider attacks. It appears to be
uncharacterizable mathematically, but may be modeled by statistical
techniques.

An
attacker gets a job in order to gain insider access to a facility. Examples
include getting a maintenance job by under-bidding opponents and then
stealing and selling inside information to make up for the cost difference,
the planting of spies in intelligence agencies of competitors, and other
similar sorts of moles.

Complexity: This issue is as complex as the general
problem of insider attacks. It appears to be uncharacterizable
mathematically, but may be modeled by statistical techniques.

Sequences of passwords are tried against a system or password
repository in order to find a valid authentication. Examples include
running the program "crack" on a stolen password file, guessing passwords on
network routers and PBX switches, and using well-known maintenance passwords
to try to gain entry.

Complexity: Password guessing has been analyzed in
painstaking detail by many researchers. In general, the problem is as hard
as guessing a string from a language chosen by an imperfect random number
generator.
[Cohen85] The complexity of attack depends on the
statistical properties of the generator. For most human languages there are
about 1.2 bits of information per symbol,
[Shannon49] so for an
8-symbol password we would expect about 9.6 bits of information and thus an
average of about 400 guesses before success. Similarly, at 2 attempts per
user name (many systems use thresholds of 3 bad guesses before reporting an
anomaly) we would expect entry once every 200 users. For 8-symbol passwords
chosen uniformly and at random from an alphabet of 100 symbols, 5 quadrillion
guesses would be required on average.

Invalid values are used to cause unanticipated behavior.
Examples include system calls with pointer values leading to unauthorized
memory areas and requests for data from databases using system escape
characters to cause interprocess communications to operate improperly.

Complexity: In most cases, only a few hundred well-considered attempts are
required to find a successful attack of this sort against a program. No
mathematical theory exists for analyzing this in more detail, but a
reasonable suspicion would be that several hundred common failings make up
the vast majority of this class of attacks and that those sorts of flaws
could be systematically attempted. There is some speculation that software
testing techniques
[Lyu95] could be used to discover such flaws, but no
definitive results have been published to date.
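A minimal sketch of systematically trying invalid values follows. `parse_record` is a deliberately flawed toy parser invented for illustration, and the probe list covers a few of the common malformed shapes the text alludes to (empty input, missing delimiters, repeated delimiters, embedded NULs, oversized fields).

```python
def parse_record(raw: str) -> dict:
    """Toy record parser with a typical flaw: it trusts its input.
    Expected format "name=value"; crashes when '=' is absent or repeated."""
    name, value = raw.split("=")    # ValueError on 0 or >1 occurrences of '='
    return {name: value}

# A crude invalid-value sweep: feed common malformed shapes and record
# which ones the parser fails to handle gracefully.
probes = ["", "no-equals", "=", "a=b=c", "a=", "\x00=x", "a" * 10000 + "="]
failures = []
for p in probes:
    try:
        parse_record(p)
    except Exception as exc:
        failures.append((p, type(exc).__name__))
```

Even this blind sweep of seven inputs exposes three distinct crashing inputs, consistent with the observation that a few hundred well-considered attempts usually suffice against an unhardened program.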

Functions not included in the
documentation or unknown to the system owners or operators are exploited to
perform undesirable actions. Examples include back doors placed in systems
to facilitate maintenance,

undocumented system calls commonly inserted by vendors to enable special
functions resulting in economic or other market advantages, and program
sequences accessible in unusual ways as a result of improperly terminated
conditionals.
Complexity: Back-doors and other intentional functions are
normally either known or not known. If they are known, the attack takes
little or no effort. Finding back-doors is probably, in general, as hard as
demonstrating program correctness or similar problems that are at least
NP-complete and may be nearly exponential depending on what has to be shown.
There is some speculation that decision and data flow analysis might lead to
the detection of such functions, but no definitive results have been
published to date.

Lack of adequate notice is used as an excuse to
do things that notice would normally have prohibited or warned against.
Examples include unprosecutable entry via normally unused services, password
guessing through an interface not providing notice, and Web server attacks
which bypass any notice provided on the home page.

Complexity: Notice is
trivially demonstrated to be given or not given depending on the method of
entry. The most effective overall protection from this sort of exploit
would be the change of laws regarding certain classes of attacks.

A program, device, or person is granted
privileges not strictly required in order to perform their function and the
excess privilege is exploited to gain further privilege or otherwise attack
the system. Examples include Unix-based SetUID programs granted root access
exploited to grant attackers unlimited access, access to unauthorized
need-to-know information by a systems administrator granted too-flexible
maintenance access to a network control switch, and user-programmable DMA
devices reprogrammed to access normally unauthorized portions of memory.

Complexity: Determining whether a privileged program grants excessive
capabilities to an attacker appears, in general, to be as hard as proving
program correctness, which is at least NP-complete and may be nearly
exponential depending on what has to be shown. Determining what privileges
a program should be granted and has been granted may be somewhat easier
but no substantial analysis of this problem has been published to date.

The computing environment upon which programs or people depend
for proper operation is corrupted so as to cause those other programs to
operate incorrectly. Examples include manipulating the Unix IFS environment
variable so as to cause command interpretation to operate unusually,
altering the PATH (or similar) variable in multi-user systems to cause
unintended programs to be used, and manipulation of a paper form so as to
change its function without alerting the person filling it out. In the
physical domain, this includes the introduction of gasses, dust, or other
particles, chemicals, or elements into the physical environment. In the
electromagnetic realm, it includes waveforms. In the human sense, sound,
smell, feel, and other sensory input corruption is included.

Complexity: In most computing environments, there are only a relatively
small number of ways that environment variables get set or used. This
limits the search for such vulnerabilities substantially, however, the ways
in which environmental variables might be used by programs in general is
unlimited. Thus the theoretical complexity of identifying all such problems
would likely be at least NP-complete. This would seem to give computational
leverage to the attacker.
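The point about environment corruption can be illustrated with PATH, one of the small number of well-understood cases: whoever controls the front of the search path controls which binary a bare command name resolves to. This sketch assumes a POSIX-style system with `ls` present on the default PATH; the file and directory names are invented.

```python
import os
import shutil
import stat
import tempfile

# Resolution of a bare command name depends entirely on PATH, so an
# attacker who can prepend a writable directory controls which program
# actually runs when a victim types "ls".
with tempfile.TemporaryDirectory() as trap:
    fake = os.path.join(trap, "ls")
    with open(fake, "w") as f:
        f.write("#!/bin/sh\necho attacker-controlled\n")
    os.chmod(fake, os.stat(fake).st_mode | stat.S_IEXEC)  # make it executable

    honest = shutil.which("ls")   # resolution under the unmodified PATH
    poisoned = shutil.which(
        "ls", path=trap + os.pathsep + os.environ.get("PATH", os.defpath)
    )
    # `poisoned` now points into the attacker-controlled directory.
```

The search for this particular vulnerability is cheap because PATH's semantics are known; the NP-completeness claimed in the text arises from the unbounded ways arbitrary programs may consume arbitrary variables.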

Access to a device is exploited to alter its function or cause
its function to be used in unanticipated ways. Examples include removing
shielding from a wire so as to cause more easily received electromagnetic
emanations, reprogramming a bus device to deny services at a hardware level,
and altering microcode so as to associate attacker-defined hardware functions
with otherwise unused operation codes.

Complexity: Since hardware devices
are, in general, at least as complex as software devices, the complexity of
detecting such a flaw would appear to be at least NP-complete. Injecting
such a flaw, on the other hand, appears to be quite simple - given physical
access to a device.

Mismatches between models and the realities they are intended
to model cause the models to break down in ways exploitable by attackers.
Examples include use of the Bell-LaPadula model of security
[Bell73] as
a basis for designing secure operating systems - thus leaving disruption
uncovered, modeling attacks and defenses as if they were statistically
independent phenomena for risk analysis - thus ignoring synergistic effects,
and modeling misconfigurations as mis-set protection bits - when the content
of configuration files remains uncovered.

Complexity: There is some theory
about the adequacy of modeling, however, there is no general theory that
addresses the protection-related issues of modeling flaws. This appears to
be a very complex issue.

Two or more simultaneous or split multi-part access attempts
are made, resulting in an improper decision or loss of audit information.
Examples include the use of large numbers of access attempts over a short
period of time so as to cause grant/refuse decision software to act in a
previously unanticipated and untested fashion, the execution of sequences of
operations required for system takeover by multiple user identities, and the
holding of a resource required for some other function to proceed so as to
deny completion of that service.

Complexity: This problem has been analyzed
in a cursory fashion and the number of possible sequences of events appears
to be factorial in the combined lengths of the programs coexisting in the
environment.
[Cohen94-3] Clearly a full analysis is infeasible for even
simplistic situations. It is closely related to the interrupt sequence
mishandling problem.
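The combinatorial growth claimed above is easy to make concrete: the number of distinct interleavings of several independent instruction sequences is a multinomial coefficient, which grows factorially in the combined length. The `interleavings` helper is an illustrative name.

```python
from math import factorial

def interleavings(*lengths):
    """Number of distinct interleavings of independent sequences of the
    given lengths: the multinomial (n1+...+nk)! / (n1! * ... * nk!)."""
    count = factorial(sum(lengths))
    for n in lengths:
        count //= factorial(n)
    return count

# Even two 10-instruction programs admit ~185 thousand schedules;
# three such programs admit over 5 trillion.
two = interleavings(10, 10)
three = interleavings(10, 10, 10)
```

The jump from two programs to three shows why exhaustive analysis is infeasible "for even simplistic situations": each added sequence multiplies the schedule count by another combinatorial factor.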

Programs operating in a shared environment inappropriately
trust the information supplied to them by untrustworthy programs. Examples
include forged data from Domain Name Servers in the Internet used to reroute
information through attackers, forged replies from authentication daemons
causing untrusted software to be run by access control software, forged
Network Information Service packets causing wrong password entries to be
used in authenticating attackers, and network-based administration programs
that can be fooled into forwarding incorrect administrative controls.

Complexity: In general, analyzing this problem would seem to require
analyzing all of the interdependencies of programs. In today's networked
environment, this would appear to be infeasible, but no detailed analysis
has been published to date.

Unanticipated or incorrectly handled interrupt sequences cause
system operation to be altered unpredictably. Examples include stack frame
errors induced by incorrect interrupt handling, the incorrect swapping out of
the swapping daemon on unanticipated conditions, and denial of services
resulting from improper prioritization of interrupts.

Complexity: This
problem has been analyzed in a cursory fashion and the number of possible
sequences of events appears to be factorial in the combined lengths of the
programs coexisting in the environment.
[Voas93] Clearly a full analysis is
infeasible for even simplistic situations. It is closely related to the
simultaneous access exploitation problem.

An emergency condition is induced resulting in behavioral
changes that reduce or alter protection to the advantage of the attacker.
Examples include fires, during which access restrictions are often changed
or less rigorously enforced, power failures during which many automated
alarm and control systems fail in a safe mode with respect to some -
possibly exploitable - criteria, and computer incident response during which
systems administrators commonly deviate - perhaps exploitably - from their
normal behavioral patterns.

Complexity: In most cases, emergency procedures
bypass many normal controls, and thus many attacks succeed during an
emergency that would be far more difficult during normal operations. No
complexity measure has been made of this phenomenon to date.

Systems that depend on synchronization are desynchronized
causing them to fail or operate improperly. Examples include DCE servers
that may deny services network-wide when caused to become desynchronized
beyond some threshold, cryptographic systems which, once desynchronized may
take a substantial amount of time to resynchronize, automated software and
systems maintenance tools which may make complex decisions based on slight
time differences, and time-based locks which may be caused to open or close
at the wrong times.

Complexity: This problem appears to be similar in
complexity to the interrupt sequence mishandling problem.
[Voas93] It
appears, in general, to be factorial in the number of time-based decisions
made in a system; however, there may be substantial results in the field of
communicating sequential processes that lead to far simpler solutions for
large subclasses.

Daemon programs designed to provide privileged services upon
request have imperfections that are exploited to provide privileges to the
attacker. Examples include Web, Gopher, Sendmail, FTP, TFTP, and other
server daemons exploited to gain access to the server from over a network,
internal use only daemons such as the Unix cron facility exploited to gain
root privileges by otherwise unprivileged users, and automated backup and
recovery daemons exploited to overwrite current versions of programs with
previous - more vulnerable - versions.

Complexity: In general, this problem
is at least as complex as proving program correctness, which is at least
NP-complete and may be nearly exponential depending on what has to be shown.
Only a few daemons have ever been shown to avoid large subsets of these
exploits
[Cohen97] and those daemons are not widely used.

The introduction of multiple errors is used to cause otherwise
reliable software to fail in unanticipated ways. Examples include the
creation of an input syntax error with a previously locked error-log file
resulting in inconsistent data state, the premature termination of a
communications protocol during an error recovery process - possibly causing
a cascade failure, and the introduction of simultaneous interleaved attack
sequences causing normal detection methods to fail.
[Hecht93][Thyfault92]

Complexity: The limited work on multiple error effects
indicates that even the most well-designed and trusted systems fail
unpredictably under multiple error conditions. This problem appears to be
even more complex than proving program correctness, perhaps even falling
into the factorial time and space realm. For an attacker, producing multiple
errors is often straightforward, but for a defender to analyze them all is
essentially impossible under current theory.

Programs that reproduce and possibly evolve. Examples include
the 11,000 or so known viruses, custom file viruses designed to act against
specific targets, and process viruses that cause denial of service or
thrashing within a single system.

Complexity: Virus detection has been
proven to be undecidable in the general case.
[Cohen86][Cohen84] Viruses are also trivial to write and highly effective
against most modern systems.

Modification of data through unauthorized means. Examples
include non-database manipulation of database files accessible to all users,
modification of configuration files used to setup further machines, and
modification of data residing in temporary files such as intermediate files
created during compilation by most compilers.

Complexity: Data diddling is
a relatively simple task. If the data is writable, it can be easily
diddled, and if it is not writable, diddling is impossible until this
condition changes.

Electromagnetic emanations are observed from afar. Examples
include the tapping of Scotland Yard by a reporter to demonstrate a $100
remote tapping device and observed emanations from financial institutions
indicative of pending trades.

Complexity: van Eck bugging is relatively
easy to do and requires only cursory knowledge of electronics and antenna
theory.
[vanEck85]

Jamming signals are introduced to cause failures in electronic
communications systems. Examples include the method and apparatus for
altering a region in the Earth's atmosphere, ionosphere, and/or magnetosphere, and
common radio jamming techniques.

Complexity: Simplistic jamming is straightforward;
however, power-efficient jamming is necessary in order to have good
effect against spread spectrum and similar anti-jamming systems, and this is
somewhat more complex to achieve.

Private Branch eXchanges (PBXs) or similar switching centers are
attacked in order to exploit weaknesses in their design allowing connected
telephone instruments to be tapped. Examples include on-hook bugging of
hand-held instruments, open microphone listening, and exploitation of silent
conference calling features.

Complexity: In cases where functions that
support bugging are provided by the PBX, this attack is straightforward. In
cases where no such function is provided, it is essentially impossible.
Determining which is the case is non-trivial in general, but in practice it
is usually straightforward.

Audio and video input devices connected to computers for
multi-media applications are exploited to allow attackers to look at and
listen to events at remote locations. Examples include most versions of
video and audio equipment currently connected to multi-media workstations
and some video-phone systems.

Complexity: Audio and video viewing attacks
normally depend on breaking into the operating system and then enabling a
built-in function. The complexity lies primarily in breaking into the
system and not in turning on the viewing function.

Repair processes are exploited to extract, modify, or destroy
information. Examples include computer repair shops copying information and
reselling it and maintenance people introducing computer viruses.

Complexity: This attack requires involvement in the repair process and is
normally not directed at a particular victim from its inception but rather
directed toward an audience (market segment). There is little complexity
involved in carrying out the attack once the position as a repair provider is
established.

Break into the wire closet and alter the physical or logical
network so as to grant, deny, or alter access. Examples include wire tapping
techniques, malicious destruction of wiring causing service disruption, and
the introduction of video tape players into surveillance channels to hide
physical access.

Complexity: Wire closet attacks require only technology
knowledge, access to the wire closet, and a goal. The complexity of finding
the proper circuits to attack is normally within the knowledge level of a
telephone service person or other wire-person.

Watching over peoples' shoulders as they use information or
information systems. Examples include watching people as they enter their
passwords, watching air travelers as they use their computers and review
documents while in flight, and observing users in normal operations to
understand standard operating procedures.

Legitimately accessible data is aggregated to derive
unauthorized information. Examples include getting the total departmental
salary figures just before and after a new employee is hired to derive the
salary of the new hire, attending a wide range of unclassified but private
meetings in a particular area in order to gain an overall picture of what
work a group is doing, and tracking movements of many people from a
particular organization and correlating that information with job titles and
other events to derive intelligence indicators.

Complexity: Data
aggregation can be quite complex both to perform and to protect against.
Some work on protecting against these attacks has led to identifying
NP-complete problems, while gathering information through this technique may
involve solving a large number of equations in a large number of unknowns
and is similar to integer programming problems in complexity.
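The before-and-after salary example above can be sketched in a few lines. This is a toy illustration only; the aggregate-query interface and all figures are invented:

```python
# Hypothetical illustration of a data-aggregation attack: two individually
# authorized aggregate-only queries reveal an individual's salary.
def department_total(salaries):
    """Stand-in for an authorized aggregate-only query."""
    return sum(salaries)

before = [72000, 88000, 65000]   # departmental salaries before the new hire
after = before + [59000]         # salaries after the new hire joins

# Neither total is sensitive on its own, but their difference is.
leaked_salary = department_total(after) - department_total(before)
print(leaked_salary)  # 59000
```

The general case, with many overlapping aggregates, becomes the system-of-equations problem described above.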

Bypassing a normal process in order to gain advantage.
Examples include retail returns department employees entering false return
data in order to generate refund checks, use of computer networks to
generate additional checks after the legitimate checks have passed the last
integrity checks, and altering pricing records to reflect false inventory
levels to cover up thefts.

Complexity: This attack is often accomplished by a relatively
unsophisticated attacker using only knowledge gained while on the job. The
complexity of many such attacks is low; however, in the general case it may
be quite difficult to assure that no such attacks exist without a particular
level of collusion. No formal analysis has been published to date.

The content sent to an interpretive mechanism causes that
mechanism to act inappropriately. Examples include Web-based URLs that
bypass firewalls by causing the browser within the firewall to launch
attacks against other inside systems, macros written in spreadsheet or word
processing languages that cause those programs to perform malicious acts,
and compressed archives that contain files with name clashes causing key
system files to be overwritten when the archive is decompressed.

Complexity: Many content-based attacks are quite simple or are easily
derived from published information. They tend to be quick to operate and
simple to program. More sophisticated attacks exploiting a content-based
flaw may require far more attack prowess. No mathematical analysis has been
published of this class of attacks to date.

Backups protected less comprehensively than on-line copies of
information are attacked. Examples include the placement of magnetic
devices in backup storage areas in order to erase or corrupt magnetic
backups, the infection of backup media by computer viruses, and the theft of
backup media being disposed of near the end of its lifecycle.

Complexity: Except in cases where backup information is encrypted, back-up
attacks are straightforward and introduce little complexity. In the case of
aging backup tapes some signal processing capabilities may be required in
order to reliably read sections of media, but this is not very complex or
expensive.

The process used to restore information from backup tapes is
corrupted or misused to the attacker's advantage. Examples include the
creation of fake backups containing false information, alteration of tape
head alignments so that restoration fails, and the use of privileged
restoration programs to grant privilege by restoring protection settings or
ownerships to the wrong information.

Complexity: Creating fake backups may
be complicated by having to reproduce much of what is present on actual
backups on the particular site, by having to create CRC codes for replaced
components of a backup and by having to recreate an overall CRC code for the
entire backup when altering only one component. None of these operations
are very complex and all can be accomplished with near-linear time and space
techniques.
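The near-linear CRC recomputation can be sketched as follows. The backup layout here is invented for illustration and does not correspond to any particular backup format:

```python
import zlib

# Sketch: a "backup" is a set of named components, each with a CRC-32,
# plus an overall CRC-32 over all component data.
def make_backup(components):
    return {
        "components": {name: (data, zlib.crc32(data)) for name, data in components},
        "overall": zlib.crc32(b"".join(data for _, data in components)),
    }

def forge_component(backup, name, new_data):
    """Replace one component and recompute both CRCs -- linear in total size."""
    backup["components"][name] = (new_data, zlib.crc32(new_data))
    joined = b"".join(d for d, _ in backup["components"].values())
    backup["overall"] = zlib.crc32(joined)
    return backup

good = make_backup([("passwd", b"root:x:0:0"), ("hosts", b"10.0.0.1 db")])
forged = forge_component(good, "passwd", b"evil:x:0:0")
# The forged backup is internally consistent: every CRC matches its data.
```

Because CRCs are checksums rather than cryptographic authenticators, nothing in this scheme binds the data to its legitimate origin.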

Activity termination protocols fail or are interrupted so that
termination does not complete properly and the protocol is taken over by the
attacker. Examples include modem hangup failures leaving logged-in terminal
sessions open to abuse, interrupted telnet sessions taken over by attackers,
preventing proper protocol completion as in the Internet SYN attacks so as
to deny subsequent services, and refusing to completely disconnect from a
call-back modem at the central office (CO), causing the call-back mechanism to become
ineffective.

Complexity: These classes of attacks are normally simple to
carry out with probabilistic effects depending on the environment.

Call forwarding capabilities are abused. Examples include the
use of computer controlled call forwarding to forward calls from call-back
modems so that attackers receive the call-backs, forwarding calls to
illegitimate locations so as to intercept communications and provide false
or misleading information, and the use of programmable call forwarding to
cause long distance calls to be billed to the forwarding party's account.

Complexity: This class of attacks is relatively simple to carry out but
often require a precondition of breaking into a system involved in the
forwarding operation.

Excessive input is used to overrun input buffers, thus
overwriting program or data storage so as to grant the attacker undesired
access. Examples include sendmail overflows resulting in unlimited system
access from attackers over the Internet, Web server overflows granting
Internet attackers unlimited access to Web servers, buffer overruns in
privileged programs allowing users to gain privilege, and excessive input
used to overrun input buffers causing loss of critical data so as to deny
services or disrupt operations.

Complexity: In the case of denial of
service, these attacks are trivial to carry out with a high probability of
success. If the attacker wishes to gain access for more specific results,
it is usually necessary to identify characteristics of the system under
attack and create a customized attack version for each victim configuration.
This is not very complex but it is time and resource consumptive.
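The overrun mechanism can be modeled abstractly. This is a toy model of adjacent memory, not real machine state; the buffer size and flag layout are invented:

```python
# A toy model of an unchecked copy: a fixed-size input buffer sits directly
# before an "is_admin" flag byte in one contiguous byte array.
memory = bytearray(9)            # bytes 0-7: input buffer; byte 8: flag
BUF_SIZE, FLAG = 8, 8

def unchecked_copy(dest, src):
    # No bounds check, as in a vulnerable C strcpy()-style routine.
    for i, b in enumerate(src):
        dest[i] = b

unchecked_copy(memory, b"A" * 8 + b"\x01")  # 9 bytes into an 8-byte buffer
print(memory[FLAG])  # 1 -- the attacker-controlled byte overwrote the flag
```

In a real exploit the overwritten bytes are typically a return address rather than a flag, which is why each victim configuration needs a customized payload.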

Values not permitted by the specification but allowed to pass through
the implementation are used to cause abnormal results. Examples include
negative dates producing negative interest which accrues to the benefit of
the attacker, cash withdrawal values which overflow signed integers in
balance adjustment causing large withdrawals to appear as large deposits, and
pointer values sent to system calls that point to areas outside of
authorized address space for the calling party.

Complexity: Most such
attacks are easily carried out once discovered, but systematically
discovering such attacks is, in general, similar to the complexity of gray
box testing until the first fault is found.
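The signed-integer overflow example can be made concrete. The balance-update routine below is hypothetical; it simply reproduces 32-bit two's-complement arithmetic in Python:

```python
def to_int32(x):
    """Wrap a Python integer to 32-bit two's complement, as C int math would."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def adjust_balance(balance_cents, withdrawal_cents):
    # Hypothetical back end performing the update in 32-bit signed math.
    return to_int32(balance_cents - withdrawal_cents)

# A withdrawal large enough to underflow the signed range flips the sign:
result = adjust_balance(100_00, 2_200_000_000)
print(result)  # 2094977296 -- the huge withdrawal appears as a huge deposit
```

An implementation that checked the specification's permitted value range before doing arithmetic would reject the withdrawal outright.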

Data left as a result of incomplete or inadequate deletion is
gathered. Examples include object reuse attacks like the DOS undelete
command in insecure operating systems, electromagnetic analysis of deleted
media to regain deleted bits, and electron microscopy techniques used to
extract overwritten data.

Complexity: Residual data gathering in the case
of simple undeletions or allocating large volumes of space and examining
their content is straightforward. Looking for residual data on magnetic media
using electromagnetic measurements and electron microscopy is somewhat more
complex and requires statistical analysis and correlation of signals in a
signal processing component. While this is not trivial, it is within the
capability of most electrical engineers and electronics specialists.

Programs with privilege are misused so as to provide
unauthorized privileged functions. Examples include the use of a backup
restoration program by an operator to intentionally restore the wrong
information, misuse of an automated script processing facility by forcing it
to make illicit copies of legitimate records, and the use of configuration
management tools to create vulnerabilities.

Complexity: Once a
vulnerability has been identified, exploitation is straightforward.
Systematically discovering such attacks is, in general, similar to the
complexity of gray box testing until the first fault is found.

Attack67: error-induced misoperation -

Errors caused by the attacker induce incorrect operations.
Examples include the creation of a faulty network connection to deny network
services, the intentional introduction of incorrect data resulting in
incorrect output (i.e., garbage in - garbage out), and the use of a
scratched and bent diskette in a disk drive to cause the drive to
permanently fail.

Audit trails are prevented from operating properly. Examples
include overloading audit mechanisms with irrelevant data so as to prevent
proper recording of malicious behavior, network packet corruption to prevent
network-based audit trails from being properly recorded, and consuming some
resource critical to the auditing process so as to prevent audit from being
generated or kept.

Complexity: This class of attacks has not been thoroughly analyzed from a
mathematical standpoint, but it appears that in most systems, audit trail
suppression is straightforward. It may be far more difficult to accomplish
this in a system designed to provide a high assurance of audit
completeness.

Stresses induced on a system cause it to fail. Examples include
paging monsters that result in excessive paging and reduced performance,
process viruses that consume various system resources, and large numbers of
network packets per unit time which tie up systems by forcing excessive
high-priority network interrupt processing.

Complexity: Although some
attacks of this sort appear to be available without substantial effort, in
general, understanding the implications of stress on multiprocessing systems
is beyond the current theory. It appears from a cursory examination that
this is at least as complex as the interrupt sequence problem which appears
to be factorial in the number of instructions in each of the simultaneous
processes.

Known hardware or system flaws are exploited by the attacker.
Examples include a hardware flaw permitting a power-down instruction to be
executed by a non-privileged user, causing an operating system to use results
of a known calculation error in a particular microprocessor for a key
decision, and sending a packet with a parameter that is improperly handled
by a network component.

Complexity: Discovering hardware flaws is, in general, similar in complexity
to discovering software flaws, which makes this problem at least
NP-complete.

Causing illegitimate updates to be made. Examples include
sending a forged update disk containing attack code to a victim,
interrupting the normal distribution channel and introducing an
intentionally flawed distribution tape to be delivered, and substituting a
false update disk for a real one at the vendor or customer site.

Complexity: This attack appears to be easily carried out against many
installations and examples have shown that even well-trained and adequately
briefed employees fail to prevent such an attack. In cases where relatively
secure distribution techniques are used, the complexity may be driven up,
but more often than not, the addition of a disk will bypass even this sort
of process.

Characteristics of network services are exploited by the
attacker. Examples include the creation of infinite protocol loops which
result in denial of services (e.g., echo packets under IP), the use of
information packets under the Network News Transfer Protocol to map out a
remote site, and use of the Source Quench protocol element to reduce traffic
rates through select network paths.

Complexity: Analyzing protocol
specifications to find candidate attacks appears to be straightforward and
implementing many of these attacks has proven within the ability of an
average programmer. In general, this problem would appear to be as complex as
protocol analysis, which has been studied in depth and shown to be
at least NP-complete for certain subclasses of protocol elements.

A set of attackers use a set of vulnerable
intermediary systems to attack a set of victims. Examples include a
Web-based attack causing thousands of browsers used by users at sites all
around the world to attack a single victim site, a set of simultaneous
attacks by a coordinated group of attackers to try to overwhelm defenses,
and an attack where thousands of intermediaries were fooled into trying to
gain access to a victim site.

Complexity: Devising distributed coordinated attacks (DCAs) appears to be simple, while tracing a DCA to a
source can be quite complex. Early results indicate that tracking a DCA to
a source is exponential in the number of intermediaries involved, while
detecting a high-volume DCA appears to be straightforward.

The attacker positions forces between two communicating parties
and both intercepts and relays information between the parties so that each
believes they are talking directly to the other when, in fact, both are
communicating through the attacker. Examples include attacks on public key
cryptosystems permitting a man-in-the-middle to fool both parties, attacks
wherein an attacker takes over an ongoing telecommunications session when
one party decides to terminate it, and attacks wherein an attacker inserts
transactions and prevents responses to those transactions from reaching the
legitimate user.

Complexity: Man-in-the-middle attacks normally require the implementation of
a near-real-time capability, but there are no mathematical impediments to
most such attacks.

The attacker gets one of the parties to encrypt or sign one or
more messages of the attacker's choosing, thus causing information about the
victim's system to be revealed. Examples include causing a user of the RSA
signature system to reveal their secret key through a series of signatures,
the introduction of malicious commands into the data entry stream of a
victim who is blindly following directions of a remote person claiming to be
assisting them, and inducing a bank to make a series of attacker-specified
transactions so as to cause cryptographic protocols, methods, or keys to be
revealed.

Complexity: Selected plaintext attacks have differing complexity depending
on the system under attack. Attacks on RSA systems have been shown to be
linear in time and polynomial in space.
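The RSA case rests on the multiplicative property of textbook RSA signatures: sig(m1) * sig(m2) mod n is a valid signature on m1 * m2. A sketch with tiny demonstration primes (not secure parameters) follows:

```python
# Textbook-RSA sketch: n = 61 * 53, public exponent e, secret exponent d.
n, e, d = 3233, 17, 2753

def sign(m):
    """The victim signs attacker-chosen messages."""
    return pow(m, d, n)

m1, m2 = 42, 99
forged = (sign(m1) * sign(m2)) % n            # attacker combines signatures
assert forged == sign((m1 * m2) % n)          # a signature the victim never made
assert pow(forged, e, n) == (m1 * m2) % n     # it verifies under the public key
```

Padding schemes used in practice are designed precisely to destroy this algebraic structure.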

Communicated information is replayed and causes unanticipated
side effects. Examples include the replay of encrypted funds transfer
transmissions so as to cause multiples of an original sum of money to be
transferred, replay of coded messages causing the repeated movement of
troops, replay of transaction sequences that simulate behavior so as to
cover up actual behavior, and the delayed replay of events such as races so
as to deceive a victim.

Complexity: Replay attacks are typically simple to perform and require
little or no sophistication. In some cases, relatively complex coding may
be required in order to reproduce CRC codes or checksums, but this is
normally not required for replay attacks.
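A minimal sketch of why replay works: the protocol below (entirely hypothetical, including the shared key) authenticates each message with a MAC but carries no nonce or timestamp, so a verbatim replay of a captured packet is accepted again:

```python
import hmac, hashlib

key = b"shared-secret"   # hypothetical shared MAC key

def make_transfer(amount):
    msg = f"transfer:{amount}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def bank_accepts(packet, balance):
    msg, tag = packet.rsplit(b"|", 1)
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest().encode()
    if hmac.compare_digest(tag, expected):
        balance -= int(msg.split(b":")[1])   # debits even a replayed packet
    return balance

captured = make_transfer(100)                # attacker records one packet
balance = bank_accepts(captured, 1000)       # legitimate delivery
balance = bank_accepts(captured, balance)    # replay: same funds moved twice
print(balance)  # 800
```

Adding a monotonically increasing sequence number or nonce under the MAC is the standard countermeasure.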

Cryptographic techniques are analyzed so as to find methods to
break codes used to secure information. Examples include frequency analysis
for breaking monoalphabetic substitution ciphers, index of coincidence
analysis for breaking polyalphabetic substitution ciphers, the breaking of
the Enigma cipher in World War II through mathematical and optical
techniques combined with knowledge of keys and key usage, exhaustive attacks
on the DES encryption standard, code-listeners for breaking many analog
speech encoding systems, and improved factoring for breaking cryptosystems
based on modular arithmetic.

Complexity: Cryptanalysis is a widely studied
mathematical area and typically involves a great deal of expertise and
computing power against modern cryptographic systems. Cryptanalysis of
improperly designed systems and of systems invented before the 1940s is
almost universally accomplished by relatively simple automation.
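Frequency analysis, the oldest technique mentioned above, can be automated in a few lines. The sketch below breaks a simple Caesar (shift) cipher by assuming the most frequent ciphertext letter stands for 'e'; the sample plaintext is invented:

```python
from collections import Counter

def encrypt(text, shift):
    """Caesar shift over lowercase letters."""
    return "".join(chr((ord(c) - 97 + shift) % 26 + 97) for c in text)

def break_shift(ciphertext):
    # Classical frequency analysis: the commonest letter is probably 'e'.
    top = Counter(ciphertext).most_common(1)[0][0]
    return (ord(top) - ord("e")) % 26

ct = encrypt("themostfrequentletterinenglishtextise", 7)
shift = break_shift(ct)
print(shift)  # 7 -- recovered without the key
```

General monoalphabetic substitution needs the full letter-frequency distribution and some search, but the principle is identical.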

Keys in cryptographic systems are managed by imperfect
management systems that are attacked in order to gain access to keying
materials. Examples include attacks based on inadequate randomness in key
generation techniques, exploitation of selected plaintext attacks against
inadequately implemented automated encryption systems, and breaking into
computers housing keying materials.

Complexity: Many key management attacks
require a substantial amount of computing power, but this is normally on the
order of only a few million computations to break a key that could not be
broken exhaustively under any feasible scheme. The complexity of these
attacks tends to be specific to the particular key management system. In
many cases, the weakest link is the computer housing the keys and this is
often attacked in a relatively small amount of time through other
techniques.

Channels not normally intended for information flow are used to
flow information. Examples include widely known covert channels in secure
operating systems, time-based covert channel exploitation in encryption
engines, and covert channels created by the association of movements of
people with activities.

Complexity: It has been shown that in any system using shared resources in a
non-fixed fashion, covert channels exist. They are typically easy to
exploit using Shannon's communications theory to provide an arbitrary
reliability at a given bandwidth based on the channel bandwidth and signal
to noise ratio of the covert channel. Avoiding detection depends primarily on
remaining below the detection threshold used by detection techniques to try
to detect covert channel activity.
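The bandwidth claim above follows directly from Shannon's capacity formula, C = B log2(1 + S/N). A small illustration with invented channel parameters:

```python
import math

def covert_capacity(bandwidth_hz, signal, noise):
    """Shannon capacity bound on reliable covert bits per second."""
    return bandwidth_hz * math.log2(1 + signal / noise)

# e.g. a 10 Hz timing channel with a 3:1 signal-to-noise ratio:
print(covert_capacity(10, 3, 1))  # 20.0 bits per second
```

Coding techniques let the attacker approach this bound at any desired reliability, which is why limiting covert channels usually means limiting bandwidth rather than eliminating the channel.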

Errors are induced into systems to reveal values stored in
those systems. Examples include recent demonstrations of methods for
inducing errors so as to reveal keys stored in smart-cards and other similar
key-transportation devices, the introduction of multiple errors into
redundant systems so as to cause the redundancy to fail, and the
introduction of errors designed to cause systems to no longer be used in
critical applications.

Complexity: The complexity of error insertion is not
known, however many researchers have recently claimed to have produced
efficient and reliable insertion techniques. The mathematics in this area is
quite new and definitive results are still pending.

Reflexive reactions are exploited by the attacker to induce
desired behaviors. Examples include the creation of attacks that appear to
come from a friend so as to cause automated response systems to shut down
friendly communication, induction of select flaws into the power grid so as
to cause SCADA systems to reroute power to the financial advantage of select
suppliers, and the use of forged or interrupted signals so as to cause
friendly fire incidents.

Complexity: The concept of reflexive control is
easily understood, and for simplistic automated response systems, finding
exploitations appears to be quite simple, but there has been little
mathematical work in this area (other than general work in control theory)
and it is premature to assess a complexity level at this time. In general,
it appears that this problem may be related to the problems in producing and
analyzing cascade failures, in that causing a desired reflexive reaction with a
reasonable degree of control may be quite complex.

Interdependencies of systems and components are analyzed so as
to determine indirect effects and attack weak points upon which strong
points depend. Examples include attacking medical information systems in
order to disrupt armed forces deployments, attacking the supply chain in
order to corrupt information within an organization, and attacking power
grid elements in order to disrupt financial systems.

Complexity: The
analysis of dependencies appears to require substantial detailed knowledge of
an operation or similar operations. Finding common critical dependencies
appears to be straightforward, but producing desired and controllable
effects may be more complex. Mathematical analysis of this issue has not
been published to date. Common mode faults and systemic flaws are of
particular utility in this sort of attack.

Interprocess communications channels are attacked in order to
subvert normal functioning. Examples include the introduction of false
interprocess signals in a network interprocess communications protocol
causing misbehavior of trusted programs, the disruption of interprocess
communications by resource exhaustion so as to prevent proper checking or
reduce or eliminate functionality, and observation of interprocess
communications stored in shared temporary data files so as to gain
unauthorized information.

Complexity: Interprocess communication attacks
oriented toward disruption appear to be easily accomplished, but no
mathematical analysis of this class of attacks has been published to date.

Attack detection based on thresholds of activity that
differentiate between attacks and similar non-malicious behaviors is
exploited by launching attacks that operate below the detection threshold.
Examples include breadth-first password guessing attacks, breadth-first port
scanning attacks, and low bandwidth covert channel exploitations.

Complexity: Remaining below detection thresholds is straightforward if the
thresholds are known and not possible to guarantee if they are unknown. In
most cases, estimates based on comparable policies or widely published
standards are adequate to accomplish below-threshold attacks.
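The breadth-first password-guessing example can be sketched as a guess scheduler. The lockout threshold and account names are invented:

```python
LOCKOUT_THRESHOLD = 3   # hypothetical per-account limit per detection window

def breadth_first_rounds(accounts, common_passwords):
    """Yield (account, guess) pairs: one guess per account per round, so no
    single account ever accumulates enough failures to trip the threshold."""
    for password in common_passwords:
        for account in accounts:
            yield account, password

attempts = list(breadth_first_rounds(["alice", "bob"], ["123456", "letmein"]))
per_account = max(
    sum(1 for a, _ in attempts if a == acct) for acct in {"alice", "bob"}
)
print(per_account < LOCKOUT_THRESHOLD)  # True -- every account stays below it
```

Detection must therefore aggregate failures across accounts (or sources), not merely per account.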

The transitive trust relationships created by
peer-networking are exploited so as to expand privileges to the transitive
closure of peer trust. Examples include the activities carried out by the
Morris Internet virus in 1988, the exploitation of remote hosts (.rhosts)
files in many networks, and the exploitation of remote software distribution
channels as a channel for attack.

Complexity: Exploiting peer relationships
appears to be easily accomplished, requiring only a cursory examination of
history for a set of candidate peers and trial and error for exploitation.

Unchanged default values set into systems at the factory or in
a standard distribution process are known to and exploited by attackers to
gain unauthorized access. Examples include default passwords, default
accounts, and default protection settings.

Complexity: It may be quite
difficult to create a comprehensive list of appropriate defaults for any
nontrivial system because the optimal settings are determined by the
application. No substantial mathematics has been done on analyzing the
complexity of finding proper settings, but many lists of improper defaults
published for select operating systems appear to require only linear time
and space with the number of files in a system in order to verify and
correct mis-settings.

Attack87: piggybacking -

Exploiting a (usually false) association to gain advantage.
Examples include walking into a secure facility with a group of other people
as one of the crowd, acting like an ex-policeman to gain intelligence about
ongoing police activities, and adding a floppy disk to a series of floppy
disks delivered as part of a normal update process.

Complexity: No
published measures of complexity for piggybacking attacks have been made to
date, however, certain types of these attacks appear to be trivially carried
out.

Collaboration of
several parties or identities in order to misuse a system. Examples include
creation of a false identity by one party and entry of that identity into a
computer database by a second party, provision of attack software by an
outsider to an insider who is participating in an information theft,
partitioning of elements of an attack into multiple parts for coordinated
execution so as to conceal the fact of or source of an attack, and the
provision of alibis by one party to another when they collaborated in a
crime.

Complexity: Collaborative misuse has not been extensively analyzed
mathematically, but limited analysis has been done from a standpoint of
identifying effects of collaborations on leakage and corruption in POset
networks and results indicate that detecting or limiting collaborative
effects is not highly complex if the individual attacks are detectable.

Interdependent
sequences of events are interrupted by other sequences of events that
destroy critical dependencies. Examples include the change of conditions
tested in one step and depended upon for the next step (e.g., checking for
the existence of a file before creating it interrupted by the creation of a
file of the same name by another owner), changes between one step in a
process and another step assuming that no such change has been made (e.g.,
the replacement of a mounted file system previously loaded with data in a
start-up process), and waiting for non-locked resources available in one
step but not in the next (e.g., the mounting of a different tape between an
initial read-through and a subsequent restoration).

Complexity: Race
conditions are not easy to detect. In general, they require at least
NP-complete time and space and may require factorial time in some cases.
Some automated analysis tools have been implemented to detect certain
classes of race conditions in source code and have shown promise.
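The check-then-create race in the first example has a well-known atomic remedy: make creation and the exclusivity check a single system call. A minimal sketch:

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "spool")

# Racy pattern -- the gap between these two steps is the attack window:
#   if not os.path.exists(path):
#       open(path, "w")

# Atomic alternative: O_CREAT | O_EXCL creates the file or fails, with no
# window in which another party's creation can be silently adopted.
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
os.close(fd)
try:
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    print("second exclusive create refused")  # the race is detected, not lost
```

The general detection problem remains hard; this only closes one recognized pattern.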

Deceptions are generally categorized as comprising
concealment, camouflage, false and planted information, ruses, displays,
demonstrations, feints, lies, and insight (as described in [Dunnigan95] Jim
(James F.) Dunnigan and Albert A. Nofi, Victory and Deceit - Dirty Tricks
at War, William Morrow and Co., 1995.) Examples include the creation of a
questionnaire asking for detailed information security backgrounds under the
auspices of a possible contract used to determine what expertise is
available at a particular company to defend against a particular type of
attack (a ruse), the creation of a false front organization such as a
garbage collection business in order to gain access to valuable information
often placed in the trash (camouflage) and the claim of having special
capabilities in your upcoming product in order to force other vendors to
work in that area even though you never intend to enter into it (a feint).

Complexity: In general, deceptions comprise a complex class of techniques,
some subclasses of which are known to be undecidable to detect and trivial
to create, while other subclasses have not been analyzed.

Many attacks combine several techniques synergistically in
order to effect their goal. Examples include exploiting an emergency
response to a flood to gain entry into a terminal room where password
guessing gains entry into a system and subsequent data diddling alters
billing records, the use of a virus to create protection missettings which
are subsequently exploited by planting a Trojan horse to allow reentry and
the creation of fictitious people in key offices who are automatically
granted access to appropriate systems (process bypassing) to allow the
attacker access to other systems, and the creation of an attractive Web site
designed to exploit users who visit it by sending their browsers
content-based attacks that set up covert channels through firewalls and
extend access through peer network relationships to other systems within the
victim's network.

Complexity: Combinations and sequences of attacks are at
least as complex as their individual components, and may be more complex to
create in coordination. Detection may be less complex because detection of
any subset or subsequence may be adequate to detect the combined attack.
This has not been studied in any mathematical depth to date.

Inherent
delays are exploited by creating a ring of events that chase each others'
tails, thus creating the dynamic illusion that things are different from what
the static case would support. Examples include check kiting schemes where
delays in processing checks cause temporary conditions where the sum of the
balances indicated in a set of accounts is far greater than the total amount
of money actually invested, techniques for avoiding payments of debts for a
long time based on legally imposed delays in and rules regarding the
collection of debts by third parties, and the use of revoked keys in key
management systems without adequate revocation protocols.

Complexity: The
complexity of kiting schemes has not been mathematically analyzed in
published literature to date, but indications from actual cases are that
substantial computing power is required to track substantial kiting schemes.
The first case where such a scheme was detected and prosecuted was detected
because the kiter's computer failed for a long enough period of time that
the set of transactions and delays could no longer be tracked - the kite fell
out of the sky.

Many small transactions are used together for a larger
aggregated effect. Examples include taking round-off error amounts from
financial interest computations and adding them to the thief's account
balance (resulting in no net loss to the system), the slow leakage of
information through covert channels at rates below normal detection
thresholds, and economic intelligence gathering efforts involving the
aggregation of small amounts of information from many sources to derive an
overall picture of an organization.

Complexity: Attacks of this sort are
relatively easy to create. Mathematical analysis of the general class of
salami attacks has not been done but it seems likely to be similar in
complexity to analysis of data aggregation effects.
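The round-off variant can be modeled directly. The account balances, interest rate, and posting routine below are all invented for illustration:

```python
from decimal import Decimal, ROUND_DOWN

def post_interest(balances_cents, rate, thief="thief"):
    """Toy salami scheme: interest is truncated to whole cents and every
    shaved sub-cent fraction is credited to the thief's account."""
    skimmed = Decimal(0)
    for acct, cents in balances_cents.items():
        exact = Decimal(cents) * Decimal(str(rate))
        paid = exact.quantize(Decimal("1"), rounding=ROUND_DOWN)
        balances_cents[acct] = cents + int(paid)
        skimmed += exact - paid          # the sub-cent remainder
    balances_cents[thief] = balances_cents.get(thief, 0) + int(skimmed)
    return balances_cents

out = post_interest({"a": 9999, "b": 9999, "c": 9999}, 0.0037)
print(out["thief"])  # 2 -- whole cents accumulated from truncated fractions
```

Because every customer account still balances to the cent, per-account auditing sees no loss; only system-wide reconciliation reveals the skim.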

A
transaction or other operation is repudiated by the party recorded as
initiating it. Examples include repudiating a stock trade, claiming your
account was broken into and that you didn't do it, and asserting that an
electronic funds transfer was not done.

Complexity: Repudiation has been
addressed with cryptographic techniques, but for the most part, these
techniques are easily broken - in the sense that a person wishing to
repudiate a future transaction can always act to make repudiation
supportable. In stock trades, this problem has been around for a long time
and is primarily addressed by recording all telephone calls (which form the
basis for each transaction) and using the recorded message to resolve the
issue when a disagreement is identified.

Mechanisms Which May Prevent, Limit, Reduce, or Mitigate Harm (Defenses)

Strong change control requires that research and development be
partitioned from production and that an intervening change control area be
put in place that (1) prevents unauthorized changes from passing, (2)
verifies the propriety of all changes, (3) tests all changes against sample
production data, (4) only passes verified source code from research and
development to production, (5) verifies that all changes are necessary for
their stated purpose, (6) verifies that all changes are suitable to their
stated purpose, (7) verifies that all changes clearly accomplish their
stated purpose and no other purpose, and (8) verifies that the purpose is
necessary and appropriate for the production system and authorized by
management.

Complexity: Strong change control typically triples programming costs, does
not permit rapid changes, and prohibits the use of macros and other similar
general-purpose mechanisms now widely embedded in software environments. It
also prohibits programming in the production system, requires a strong
management commitment to the effort, and is limited to environments where
the cost and inconvenience are acceptable.

Effective destruction of waste products so as to make the cost
of exploiting those waste products commensurate with the value of keeping
those waste products from being exploited. Examples include:

shredding of waste paper,

destruction of electromagnetic media,

removal and destruction of labels and other related information, and

the introduction of noise to reduce the signal-to-noise ratio.

Complexity: In most cases, well known and reasonably inexpensive techniques
are available for destruction of waste. Cost and complexity go up for
destruction of some materials in a very short time frame with very high
assurance, but this is rarely necessary.

the planting of false information that is easily traceable to the method of dissemination if discovered,

the use of sensors in waste storage and processing areas to detect and observe activities carried out there, and

auditing and testing of the destruction process.

Complexity: Detecting some
classes of waste examination may be quite complex. For example, detecting
the gathering of waste product through collection of electromagnetic
emanations may be quite hard. In most cases, relatively low-cost solutions
are available.

Complexity: The cost of
performing background checks can range from as low as a few hundred dollars
to tens of thousands of dollars, depending on the depth and level of detail
desired and the activities involved in the individual's history.

Complexity: Despite more
than fifteen years of substantial theoretical and development efforts and
hundreds of millions of dollars in costs, almost no systems to date have
been built that provide fully effective mandatory access control for general
purpose computing with reasonable performance. This appears to involve many
undecidable problems and some theoretical limitations that appear to be
impossible to fully resolve. Examples of unsolvable problems include perfect
access control decisions and non-fixed shared resources without covert
channels. Highly complex problems include viruses, data aggregation controls,
and unlimited granularity in access control decision-making.

Automated programs check protection
settings of all protected resources in a system to verify that they are
properly set. Examples include:

several Unix-based tools,

NID and similar multi-platform tools, and

network security management tools.

Complexity: If proper protection settings can be decided by a fixed time
algorithm, it takes linear time to check (and/or set) all of the protection
bits in a system. Making a decision of the proper setting may be quite
complex and may interact in non-trivial ways with the design of programs. In
many cases, commonly used programs operate in such a way that it is
impossible to set privileges properly with regard to the rest of the system
- for example database engines may require unlimited read access to the
database while internal database controls limit access by user. Since
external programs can directly read the entire database, protection should
prohibit access by non-privileged users, but since the database fails under
this condition, protection has to be set incorrectly in order for other
functions to work.
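The linear-time check described above can be sketched as a scan that compares each file's permission bits against an expected-mode policy. This is a hypothetical helper for illustration, not the interface of any of the named tools:

```python
import os
import stat
import tempfile

def check_protection(policy):
    """Linear scan: compare each file's permission bits against a policy.

    `policy` maps a path to the expected mode bits (e.g. 0o600); any
    mismatch is reported as (path, actual, expected)."""
    violations = []
    for path, expected in policy.items():
        actual = stat.S_IMODE(os.stat(path).st_mode)
        if actual != expected:
            violations.append((path, oct(actual), oct(expected)))
    return violations

# Usage sketch: create a file with overly permissive bits and detect it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o644)
print(check_protection({path: 0o600}))  # reports the mismatch
os.remove(path)
```

Deciding what the policy should say remains the hard part, as the text notes; the scan itself is trivially linear in the number of protected objects.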

Trusted programs may be used to implement or interact with
applications. Examples include:

secure Web and Gopher servers designed to provide secure versions of commonly used services,
[Cohen97]

trusted mail guards used to permit information flow that
would be in violation of policy in a Bell-LaPadula-based system if not
implemented by a trusted program, and

most device drivers written for secure operating systems.

Complexity: Trust is often given but rarely fully deserved - in programs
that is. The complexity of writing and verifying a trusted program is at
least NP-complete. In practice, only a few hundred useful programs have
ever been proven to meet their specifications and still fewer have had
specifications that included desirable security properties.

Portions of a file system are temporarily isolated for
the purpose of running a set of processes so as to attempt to limit access by
those processes to that subset of the file system. Examples include:

the Unix Chroot environment,

mandatory access controls in POset configurations, and

VM's virtual disks.

Complexity: Implementing this functionality is not very
difficult, but implementing it so that it cannot be bypassed under any
conditions has proven unsuccessful. There appears to be no fundamental
reason that this cannot be done, but in practice, interaction with other
portions of the system is almost always required - for example - in order to
perform input and output, to afford interprocess communication, and to use
commonly available system libraries.

Each user, group of users, function, program, or other consumer
of system resources is granted limited resources by the setting of quotas
over the use of those resources by the consumers. Examples include:

disk quotas available in many operating systems,

CPU cycle quotas imposed in some
environments on a usage per time basis,

memory usage quotas imposed on many timesharing systems, and

file handle quotas commonly used in timesharing systems.

Complexity: Implementing quotas is straightforward and relatively reliable,
but it is rarely used in distributed computing environments because of the
control complexity, and it is widely considered Draconian by users who
encounter quota limits when trying to do legitimate work.
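A minimal sketch of the quota idea, using a made-up in-memory ledger rather than any operating system's quota facility:

```python
class QuotaExceeded(Exception):
    pass

class QuotaLedger:
    """Toy per-consumer resource quota; an illustration of the concept only."""
    def __init__(self, limit):
        self.limit = limit
        self.used = {}

    def charge(self, consumer, amount):
        # Refuse the request if it would push the consumer over its limit.
        new_total = self.used.get(consumer, 0) + amount
        if new_total > self.limit:
            raise QuotaExceeded(f"{consumer} would exceed {self.limit}")
        self.used[consumer] = new_total
        return new_total

ledger = QuotaLedger(limit=100)
ledger.charge("alice", 60)
print(ledger.charge("alice", 30))   # 90 of 100 used
# ledger.charge("alice", 20) would raise QuotaExceeded
```

Real quota systems enforce the same refuse-at-the-limit logic inside the kernel or resource manager, which is what makes them feel Draconian to users who hit the limit mid-task.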

Resource usage is prioritized so that the system fails to
provide proper resources only under conditions where success is not
possible. Examples of avoiding improper prioritization abound, but typically
only theoretical solutions exist for achieving optima.

Complexity: The resource allocation problem is well known to be at least
NP-complete, and in practice, optimal resource allocation is not feasible.
Resource limitations are usually addressed by providing more and more
resources until the problems go away - but under malicious attack, this
rarely works.

Redundancy of some sort is used to detect faults before they
result in failures. Examples include:

multi-version systems,

coding-based fault detection, and

testing for faults.

Complexity: The general area of
fault detection includes many NP-complete, exponential, and factorial
problems. The complexity of many other such problems, however, is well
within the normal computing capability available today. These issues have to
be addressed on a case-by-case basis.
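Coding-based fault detection can be illustrated with the simplest possible code, a single even-parity bit over the data:

```python
def parity_bit(bits):
    """Even-parity code: the check bit makes the total number of 1s even."""
    return sum(bits) % 2

def check(word):
    """Verify a codeword consisting of data bits followed by a parity bit."""
    *data, p = word
    return parity_bit(data) == p

word = [1, 0, 1, 1]            # data bits
word.append(parity_bit(word))  # append the even-parity check bit
print(check(word))             # True
word[2] ^= 1                   # inject a single-bit fault
print(check(word))             # False: the fault is detected
```

A single parity bit detects any odd number of bit errors; richer codes (CRCs, Hamming codes) trade extra redundancy for detection or correction of larger fault classes, which is where the case-by-case complexity arises.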

Complexity: Physical security
is expensive and never perfect, but without physical security, any other
type of information protection is infeasible. The best attainable physical
security has the effect of restricting attacks to insiders acting with expert
outside assistance.

Complexity: Redundancy costs, and more redundancy costs more. If privacy is
to be retained, redundant systems increase the risk of exposure and must also
be protected. Avoiding common-mode failures requires that redundancy be
implemented using separate and different methods, and this increases costs
still further. Redundancy must be analyzed and considered on a case-by-case
basis.

Information is transformed into a form which obscures the
content so that the attacker has difficulty understanding it. Examples
include:

DES encryption for protecting transmitted information,

RSA cryptosystem for exchanging session keys over an open channel, and

the one-time-pad which provides perfect secrecy if properly implemented.

Complexity: Shannon's 1949 paper
[Shannon49] on
cryptanalysis asserted that, with the exception of the perfect protection
provided by the one-time-pad, cryptography is based on driving up the
workload for the attacker to break the code. The goal is to create
computational leverage so that the encryption and decryption processes are
relatively easy for those in possession of the key(s) while the same process
for those without the key(s) is relatively hard. Proper use of cryptography
requires proper key management, which in many cases is the far harder
problem. Encryption algorithms which provide the proper leverage are now
quite common.
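The one-time-pad mentioned above is simple to sketch; its perfect secrecy depends entirely on the pad being truly random, as long as the message, and never reused:

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each message byte with a pad byte; XOR is its own inverse,
    so the same function decrypts."""
    if len(pad) != len(plaintext):
        raise ValueError("pad must be exactly as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # must be random and used only once
ciphertext = otp_encrypt(message, pad)
print(otp_encrypt(ciphertext, pad) == message)  # True: round-trips exactly
```

Note that the key-management burden Shannon's result implies is visible even here: the pad must be as long as all traffic it will ever protect and must be distributed securely, which is why computational ciphers dominate in practice.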

Protocols that tend to reduce the traffic volume over time are
used to prevent positive feedback in network traffic. Examples include:

protocols that inherently produce less output than input,

protocols designed to detect loops and eliminate them, and

protocols that guarantee that the acknowledgement process produces a bounded number of packets.

Complexity: The complexity of
analyzing a protocol for being overdamped is closely related to livelock and
deadlock analysis which is probably NP-complete for most current protocols.
Nevertheless, protocol analysis is a well-studied field and it is feasible
to assure that protocols are overdamped.

When faults are detected, faulty components are isolated from
non-faulty components so that the faults do not spread. Examples include:

the partitioning of corporate networks from the Internet during the Morris Internet virus,

the normal partitioning of faulty components of the power grid from the rest of the grid, and

the partitioning of the telephone system into critical and non-critical circuits during a national emergency.

Complexity: If designed into a system, fault isolation is feasible. In cases
where fault isolation was not previously considered, a lot of effort may be
required to implement isolation - primarily because nobody knows the list of
links to be cut in order to form the partition, the location of the physical
links that need to be severed, or the effect of partitioning on the two
subsets of the network.

pointers to locations outside of the user's addressable space might all be detected.

Complexity: In properly designed
systems, legitimate ranges for values should be known and violations should
be detectable. Many systems with limited detection capability turn off
bounds and similar checking for performance reasons, but few make the
implications explicit.
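The kind of range check described here can be sketched as follows; the field names and bounds are made up for illustration:

```python
def check_ranges(record, bounds):
    """Flag any field whose value falls outside its legitimate range.

    `bounds` maps a field name to an inclusive (low, high) pair; a
    missing field is treated as in range for this sketch."""
    return [name for name, (low, high) in bounds.items()
            if not (low <= record.get(name, low) <= high)]

bounds = {"age": (0, 130), "port": (1, 65535)}
print(check_ranges({"age": 42, "port": 80}, bounds))      # []
print(check_ranges({"age": -1, "port": 70000}, bounds))   # ['age', 'port']
```

The point the text makes is that such checks are cheap when legitimate ranges are known at design time; systems that disable them for performance trade away exactly this detection capability.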

People are made aware of and trained about how to respond to
attack techniques. Examples include training and awareness seminars, in-house
awareness programs, and demonstrations wherein users are shown how they might
be vulnerable.

Complexity: It is fairly straightforward to make people
aware of most classes of attacks directed toward them and to provide them
with defenses against those attacks.

Organizations provide specifications of what is expected and how
governance is to be effected. Examples include a clear usage policy with
details of how it is enforced and what happens to those who break the rules,
a policy against using illegal copies of software so as to reduce liability,
and clear specification of who is responsible for what functions.

Complexity: Making and disseminating policy has organizational and
interpersonal complexity, but is easily implemented.

Attacks are shunted away from the most critical systems.
Examples include honey pots used to lure attackers away from real targets
and toward false and planted information, lightning rods used to create
attractive targets so that attackers direct their energies away from real
targets, shunts used to selectively route attacks around potential targets,
and jails used to encapsulate attackers during attacks and gather information
about their methods for subsequent analysis and exploitation.

Complexity: All of these techniques are easily implemented at some level.

Specific identified methods are specified to implement
protection in the hope that they have been well studied and there is a
community investment in their use. Examples include the use of X.509
certificates for interoperable key-managed encrypted data transport, Orange
book B1 approved systems for increased operating system security assurance,
and ISO-9000 processes for high-quality industrial-grade quality assurance.

Complexity: Standards tend to reduce the complexity of meeting assurance
requirements by structuring them and sharing the work. They also tend to
make interoperability easier to attain.

Specific identified methods are applied in specific ways to
implement protection in the hope that, by uniformly applying these methods,
they will be effective. Examples include the use of checklists to verify
proper status during preflight, regularly performing backups to assure that
information is not inadvertently lost, and a standard method for dealing
with bomb threats made over the telephone.

Complexity: Developing and
implementing standard procedures is not difficult and can be greatly aided by
the use of standard procedure notebooks, checklists, and other similar
aids.

Events of potential security relevance are generated. Examples
include audit records made by financial systems and used to identify
fraudulent use, sign-in sheets used to identify who has entered and/or left
an area and at what times, and computer-generated audit records that track
logins, program executions, file accesses, and other resource uses.

Complexity: Generating audit records is not difficult. Care must be taken to
secure the audit records from illicit observation and disruption and to
prevent audit trails from using excessive time or space.
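Generating an audit record can be as simple as emitting one structured event per action; the field set below is an illustrative choice, not a standard:

```python
import json
import time

def audit_record(user, action, resource, outcome):
    """Emit one append-only audit event as a JSON line."""
    return json.dumps({
        "ts": time.time(),       # when it happened
        "user": user,            # who did it
        "action": action,        # what they did
        "resource": resource,    # to what
        "outcome": outcome,      # success / denied / error
    })

trail = [audit_record("alice", "login", "host1", "success"),
         audit_record("alice", "read", "/payroll/db", "denied")]
print(trail[1])
```

As the text notes, the generation step is the easy part; protecting the trail itself from observation and tampering, and bounding its growth, are where the real effort goes.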

Audit trails are analyzed in order to detect record sequences
indicative of illicit or unexpected activities. Examples include searching
for indicators of known attacks that appear in audit records, sorting and
thresholding of audit trails to detect patterns of misuse, and the
cross-correlation of audit records to detect inconsistencies.

Complexity: Analyzing audit trails can be quite complex, and several
audit analysis problems appear to be NP-complete.

Indicators of misuse are analyzed in order to detect specific
sequences indicative of misuse. Examples include audit-based misuse
detection, analysis of system state to detect misset values or unauthorized
changes, and network-based observation of terminal sessions analyzed to
detect known attack sequences.

Complexity: In general, misuse detection
appears to be undecidable because it potentially involves detecting all
viruses, which is known to be undecidable.

Patterns of behavior are tracked and changes in these patterns
are used to indicate attack. Examples include detection of excessive use,
detection of use at unusual hours, and detection of changes in system calls
made by user processes.
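A deliberately simple sketch of this idea flags an observation that falls far outside the historical pattern; the three-standard-deviation threshold is an illustrative assumption, not a recommendation:

```python
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the historical mean (a minimal baseline model)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Daily login counts for a user; 95 logins in one day stands out.
logins = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
print(is_anomalous(logins, 95))   # True
print(is_anomalous(logins, 12))   # False
```

Real anomaly detectors model many features at once and must contend with attackers who slowly shift the baseline, but the track-the-pattern, flag-the-deviation structure is the same.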

People are brought into a level of moral agreement and
awareness that prevents them from doing the wrong thing. Examples abound,
but situation ethics classes, improved morality training, and ethics classes
have been used to effect this goal.

Making people aware of the penalties and consequences of actions
is used to dissuade improper behavior. Examples include briefings on the
people who have been caught and punished, details of what has happened to
innocent victims as a result of previous breaches of protection, and personal
appearances by people who have been involved in the process.

Protection is reassessed periodically to update methods so as
to reflect changes in technology, personnel, and other related factors.
Examples include periodic reviews, annual outside audits, and regular testing.

Responsibilities and privileges should be allocated in such a
way that prevents an individual or a small group of collaborating
individuals from inappropriately controlling multiple key aspects of a
process and causing unacceptable harm or loss. Examples include limiting
need to know areas for individuals, eliminating single administrative points
of failure, and limiting administrative domains.

Complexity: Analyzing and
implementing such controls is not difficult but may involve increased
cost.

Functions of systems and networks should be separated in such a
way that prevents individual or common-mode faults from producing
unacceptable harm or loss. Examples include the use of separate and
different media for backups, the division of interlinking functional
elements over a set of need-to-know areas.

Complexity: In practice,
efficiency often takes precedence over effectiveness and separation of
function is typically placed lower in priority.

More than one person is required in order to enact a critical
function. Examples include controls that require two people to
simultaneously turn keys in order to launch a weapon, cryptographic key
distribution systems in which keys are shared among many individuals, a
subgroup of which is required to participate in order to grant access, and
voting systems in which a majority must agree before an action is taken.

Complexity: Although some such systems are quite complex, there is no
fundamental complexity with such a system that prohibits its use when the
risk warrants it.

Multiple program versions are independently developed in order
to reduce the likelihood of similar faults creating catastrophic failures.
Examples include the redundant software used to control the dynamically
unstable Space Shuttle during reentry, software used in other critical space
systems, and software used in safety systems.

Complexity: This technique
typically multiplies the cost of implementation by the desired level of
redundancy and substantially increases the requirement for detailed
specification.

Hard-to-guess passwords (something you know) are used to make
password guessing difficult. Examples include automatically generated
passwords, systems that check passwords when entered, and systems that try
to guess passwords in an effort to detect poor selections.

Complexity: There are no difficult complexity issues in this technology, but
there are some human limitations that limit the value of this technique.
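A sketch of the password-checking idea, assuming a tiny illustrative list of common passwords and a simple character-class rule (real checkers use large dictionaries and richer heuristics):

```python
import string

COMMON = {"password", "123456", "letmein", "qwerty"}  # tiny illustrative list

def weak(password: str) -> bool:
    """Reject passwords a guesser would try first: short, common, or
    drawn from too few character classes."""
    if len(password) < 10 or password.lower() in COMMON:
        return True
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    # Require characters from at least three of the four classes.
    return sum(any(c in cls for c in password) for cls in classes) < 3

print(weak("letmein"))        # True: short and common
print(weak("x7$Lq9!vRw2e"))   # False: long, mixed classes
```

The human limitation the text alludes to is visible here: rules strong enough to defeat guessing tend to produce passwords people cannot remember, which pushes them toward writing passwords down.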

Something you have or can do is used
to augment normal authentication, typically by a time or use variation in the
authentication string. Examples include the Secure-ID authentication device
and similar cards, algorithmic authentication, and challenge-response
cards.

Complexity: Despite several potential vulnerabilities associated
with these devices, they are basically encryption devices used for
authentication and the complexity issues are similar to those involved in
other areas of cryptography.

Unique or nearly-unique characteristics of the individual
(something you are) are used to authenticate identity. Examples include
finger prints, voice prints, retinal blood vessel patterns, DNA sequences,
and facial characteristics.

Complexity: While the overall field of
biometric identification and authentication is quite complex, these methods
are normally used to authenticate identity or differentiate between
identities from relatively small populations (typically a few thousand
individuals), and this process is feasible with fairly common data
correlation techniques.

Authorization is limited in time, space, or otherwise. Examples
include limitations of when a user can make certain classes of requests,
where requests can be made from, and how much information an individual may
attain.

Complexity: Limiting authorization is not complex; however,
arbitrarily complex restrictions may be devised, and current techniques do
not provide for consistency or completeness checks on complex authorization
limitation rules.

Markings and/or labels are used to identify the sensitivity of
information. Examples include electronic marking within the computer
systems, physical marking on each sheet of paper, and labeling of tapes and
other storage media.

Complexity: Labeling is easy to do, but requires systematic procedures that
are rigorously followed.

Information is classified as to its sensitivity when it is
created. Examples include the standard classification processes in most
large businesses and government agencies, the automated classification of
information in trusted systems based on the information it is derived from,
and the declassification process performed by the government in order to
periodically reassess the sensitivity of information.

Complexity: The process of determining proper classification of information
is quite complex and normally requires human experts who are trained in the
field.

By designing hardware, software, and/or networks more securely,
attacks are made more difficult. Examples include the use of proven secure
software, verified designs, and inherently secure architectures.

Complexity: It's much harder to make things secure than to make them
functional. Nobody knows exactly how much harder, but there is some notion
that making something secure might imply verifying the security properties,
and this is known to be at least NP-complete.

Complexity: Testing abounds with complexity issues. For example, complete
tests are almost never feasible, and methods for performing less-than-complete
tests have a tendency to leave major missing pieces.

Looking for known attack sequences as indicated by state or
audit information. Examples include virus scanning and pattern matching in
audit trails against known attack signatures, and virus monitors that check
each program for known viruses at load-time.

Complexity: This class of
detection methods is almost always used to identify a finite subset of an
infinite class of attacks, and as such is only effective against commonly
used copies of known attacks.
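A minimal sketch of signature scanning; the signature names and byte patterns are invented for illustration, and, as the text stresses, the scanner detects only what is in its table:

```python
SIGNATURES = {
    "fake-test": b"FAKE-TEST-SIGNATURE",  # made-up patterns for illustration
    "shell-drop": b"rm -rf /",
}

def scan(data: bytes):
    """Return the names of known signatures found in the data.

    A finite signature table can never cover the infinite class of
    possible attacks; it only matches known patterns."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

print(scan(b"benign content"))                  # []
print(scan(b"payload: rm -rf / # cleanup"))     # ['shell-drop']
```

Production scanners add wildcard patterns, unpacking, and heuristics, but the fundamental limitation to known attacks is structural, not an implementation detail.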

Cryptographic checksums or secured modification-time
information are used to detect changes to information just before the
information is interpreted. Examples include several products on the market
and experimental systems.
[Cohen88-2][Cohen88]

Complexity: The
objective of an integrity shell is to drive up the complexity of a
corruptive attack by forcing the attacker to make a modification that
retains an identical cryptographic checksum. The complexity of attack is
then tied to the complexity of breaking the cryptographic system.
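The integrity-shell idea can be sketched with a standard cryptographic hash; this illustrates the concept rather than any particular product:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Cryptographic checksum; an attacker must find a collision to
    modify content undetected."""
    return hashlib.sha256(data).hexdigest()

class IntegrityShell:
    """Record a checksum at install time; verify it just before use."""
    def __init__(self):
        self.known = {}

    def register(self, name, content):
        self.known[name] = checksum(content)

    def verify(self, name, content):
        # Refuse to run anything whose checksum no longer matches.
        return self.known.get(name) == checksum(content)

shell = IntegrityShell()
shell.register("prog", b"original code")
print(shell.verify("prog", b"original code"))   # True
print(shell.verify("prog", b"corrupted code"))  # False
```

The checksum database itself must be protected at least as well as the programs it covers, otherwise the attacker simply re-registers the corrupted version.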

Access control is provided over increasingly smaller data sets,
perhaps down to the bit level. Examples include database field-based access
controls, record-by-record controls, and multi-level secure markings of
portions of documents.

Complexity: In general, determining accessibility of
data analyzed by Turing capable programs is an NP-complete problem,
[Denning82] and in some cases it may be undecidable.

Changes are controlled so as to increase the assurance that
they are valid. Examples include change control systems for software,
roll-back and back-out capabilities for updates, and strong change control
processes in use in select critical applications.

Complexity: Proper change
control demands that, in the production system, no programming capability be
available. The verification of the propriety of changes is complex and, in
general, may be comparable to proof of program correctness, which is well
known to be at least NP-complete.

Configurations are managed so as to eliminate known
vulnerabilities and assure that configurations are in keeping with
policies. Examples include many configuration management tools now
available for implementing protection policy across a wide range of
platforms, menu-based tools used to set up and administer systems, and tools
used to configure and manage firewalls.

Complexity: Configuration
management normally requires a tool to describe policy controls, a tool to
translate policy into the methods available for protection, and a set of
tools which implement those controls on each of the controlled machines. In
some cases, policy may be incommensurable with implemented protection
capabilities, in other cases, proper configuration may require a substantial
amount of effort, and the process of changing from one control setting
to the next may introduce unresolvable insecurities.

Entry points or functions are locked to prevent their use -
typically in response to an identified threat. Examples include the use of
physical locks used to prevent an intruder from escaping, lockout of computer
accounts based on incorrect authentication, and lockouts used to prevent
people from using systems or network components while under maintenance.

Complexity: The major complexity in lockouts comes when they are used
automatically. This opens the possibility that an enemy will use reflexive
control to cause denial of services. Analyzing this class of behaviors is
quite complex.

A secured facility is provided for putting valuable information
or documents so that it cannot be removed or can only be removed by
authorized individuals. Examples include the IBM Abyss processor for secure
network-based processing, password analysis components that take in
passwords and reveal only whether they were right or wrong, and drop boxes
used to place money in for night deposit.

Packets of information are authenticated. Typical examples
include the use of cryptographic checksums, routing information, and
redundancy inherent in data to test the authenticity and consistency of
information.

Complexity: Complexity issues are typically peripheral to the concept of
authentication of packets but the use of computational leverage is
fundamental to the effectiveness of these techniques against malicious
attackers.

Physical characteristics of information, systems, components,
or infrastructure are analyzed to detect deviations from expectations.
Examples include the use of time domain reflectometers to detect wiring
changes and listening devices, the use of pressure and gas sensors in pipes
containing critical wiring to detect breaches of the enclosure, and the
analysis of dollar bills to detect forgeries.

Complexity: No fundamental complexity limitations are inherent in the use of
physical techniques, however, they may be quite difficult to use in
networked environments over which physical processes can not easily be
controlled.

Authentication information is encrypted in storage or during
transit. Examples include the encryption of plaintext passwords passed over
networks, the encryption of authenticating information that uniquely
identifies a physical item such as a piece of paper by minor deviations in
its surface, and the distribution of authentication over multiple paths
to reduce path dependency.

Complexity: The basic issues in encrypted authentication are how to use
encryption to improve the effectiveness of the process and what
encryption algorithm to use to attain the desired degree of effectiveness.

Protection against the exploitation of electromagnetic
emanations. Examples include Faraday cages to reduce emanations in the
appropriate frequency ranges, the use of special low-emanations components,
power levels, and design, and the filtering of power supplies.

Perimeter areas are widened or enhanced so as to provide
suitable protection. Examples include increased distance, improved strength
or quality in the perimeters, and the use of multiple perimeters in
combination, such as the combination of a moat and a wall.

Complexity: There may be substantial cost involved in increasing or
enhancing perimeters, and it may be hard to definitively determine when the
perimeter is strong enough or wide enough.

Noise is injected in order to reduce the signal-to-noise ratio
and make compromise more difficult. Examples include the creation of
deceptions involving false or misleading information, the induction of
electrical, sonic, and other forms of noise to reduce the usability of
emanations, and the creation of active noise-reducing barriers to eliminate
external noises which might be used to cause speech input devices to take
external commands or inputs.

Complexity: Noise injection is fairly simple to do, but it may be hard to
definitively determine when the noise level is high enough to meet the
threat.

Signals are used to disrupt communications. Examples include
desynchronization of cryptographic systems by induction of bit or signal
alterations, the introduction of erroneous bits at a rate or distribution
such that checksums or CRC codes are unable to provide adequate coverage, and
the introduction of noise signals in a particular frequency range so as to
reduce effective bandwidth for a particular communications device or system.

Complexity: The basic issue in jamming is related to the efficient use of
limited energy. With enough energy, almost any signal can be overwhelmed,
but overwhelming signals across a wide frequency band may take more energy
than is easily available, and it is normally relatively easy to locate and
destroy such a large noise generation system. The complexity then lies in
determining the most effective use of resources for jamming.

The use of frequency-hopping radios or devices to reduce the
effect of jamming and complicate listening in on communications. Examples
include spread-spectrum portable telephones now available for homes and
spread-spectrum radios used in the military.

Complexity: The basic issues in spread spectrum are how to rapidly change
frequencies and how to schedule frequency sequences so as to make jamming
and listening difficult. The former is an electronics problem that has been
largely solved even for low-cost units, while the latter is a cryptographic
problem equivalent to other cryptographic problems.
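Treating the hop schedule as a keyed cryptographic sequence can be sketched as follows; the use of HMAC-SHA256 and the channel count here are illustrative assumptions, not a description of any fielded system:

```python
import hmac
import hashlib

def hop_sequence(key: bytes, n_channels: int, length: int):
    """Derive a keyed hop schedule: without the key the next channel is
    unpredictable, while both ends of the link, sharing the key and a
    common slot counter, compute the same sequence."""
    seq = []
    for slot in range(length):
        # Keyed digest of the time-slot number selects the channel.
        digest = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
        seq.append(int.from_bytes(digest[:4], "big") % n_channels)
    return seq

a = hop_sequence(b"shared-key", n_channels=79, length=5)
b = hop_sequence(b"shared-key", n_channels=79, length=5)
print(a == b)   # True: both ends derive the same schedule
```

This makes the text's point concrete: the frequency-switching hardware is the solved part, while the unpredictability of the schedule reduces to the strength of the keyed function.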

Multiple paths are used to reduce the dependency on a single
route of communications or transportation. Examples include the use of
different motor routes selected at random near the time of use to avoid
hijacking of shipments, the use of multiple network paths used to assure
reliability and increase the cost of jamming or bugging, and the use of
multiple suppliers and multiple apparent delivery names and addresses to
reduce the effect of Trojan horses placed in hardware.

Complexity: The basic issues in path diversity are how to rapidly change
paths and how to schedule path sequences so as to make jamming and listening
difficult. The former is a logistics problem that has been solved in some
cases, and the latter is a cryptographic problem equivalent to other
cryptographic problems.

Multiple signals taken from different vantage points are coordinated in order
to uniquely locate a source. Examples include signal triangulation to locate ships
on Earth (a near-plane), quadrangulation used in the global positioning
system (GPS) to locate a point in 3-space (plus time), and the use of
multiple audit trails in a network infrastructure to locate the source of a
signal or malicious bit-stream.

Complexity: While Euclidean spatial location is fairly straightforward,
additional mapping capability is required in order to use this technique in
cyber-space, and the number of observation points and complexity of location
has not been definitively published. This would appear to be closely
related to mathematical results on finding cuts that partition a network.
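The straightforward Euclidean case, locating a transmitter from two directional readings on a plane, can be sketched as an intersection of two lines of bearing:

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two lines of bearing (radians from the x-axis) taken
    from known observer positions p1 and p2; returns the fix."""
    (x1, y1), (x2, y2) = p1, p2
    d1x, d1y = math.cos(bearing1), math.sin(bearing1)
    d2x, d2y = math.cos(bearing2), math.sin(bearing2)
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    det = d2x * d1y - d1x * d2y
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    t = (d2x * (y2 - y1) - d2y * (x2 - x1)) / det
    return (x1 + t * d1x, y1 + t * d1y)

# Two observers each report the direction of the same emitter.
x, y = triangulate((0, 0), math.pi / 4, (10, 0), 3 * math.pi / 4)
print(round(x, 6), round(y, 6))   # 5.0 5.0
```

The cyber-space analogue the text describes lacks this Euclidean structure: "bearings" become hop-by-hop audit observations, and combining them requires a map of the network rather than plane geometry.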

Conductive enclosures are used to eliminate emanations and
induced signals from an enclosed environment. Examples include the use of
tempest shielding to reduce emanations from video display units, the use of
wire mesh enclosures to eliminate emanations from small installations and
computer rooms, and the use of metallic enclosures to prevent electromagnetic
pulse weapons from disabling enclosed systems.

Complexity: The only complexity involved in the design of Faraday cages comes
from determining the frequency range to be protected and calculating the size
of mesh required to eliminate the relevant signals. Some care must be
taken to assure that the shielding is properly in place and fully
effective.

Auditors examine an information system to provide an opinion on
the suitability of the protective measures for effecting the specified
controls. Examples include the use of internal auditors and external
auditors.

Complexity: Audits tend to be quite complex to perform and require that the
auditor have knowledge and skills suitable to making these determinations.

High bandwidth or privileged access ports and lines have
restricted access. Examples include protection of trunk telephone lines
from access by calls originating on outside lines, limiting access to
high-bandwidth infrastructure elements, and physically securing an Ethernet
within a building.

Complexity: This appears to be of minimal complexity but may involve
substantial cost or require special expertise.

Controls that limit the flow of information. Typical examples
include mandatory access controls (MAC) used in trusted systems, router
configuration which is often used to contain data between members of an
organization to remain within the organization, and information flow
controls based on work models.

Maintenance ports are disconnected or controlled so as to
effectively eliminate unauthorized access. Examples include the proper use of
cryptographic modems for covering maintenance ports, the connection of
maintenance ports only when maintenance is underway and properly supervised,
and the elimination of non-local use of maintenance functions.

Complexity: This is a relatively simple matter except in large networks or
locations where physical access is difficult, expensive, or impossible.

People's mindset is oriented toward protection issues.
Examples include organizations with effective protection awareness programs,
organizations with a strong tradition and culture of protection, and
individuals with a passion for information protection.

Complexity: Dealing with people is always a complex business, but fostering
a protection-oriented attitude is not usually difficult if management is
supportive.

Peripherals such as microphones and cameras are provided with
switches capable of definitively disabling their operation, while disks are
physically write locked and equipment may use smoke and dust filters.
Examples include covers over video cameras, switches that electrically or
mechanically disconnect microphones, and keyed disconnects for keyboards,
mice, and displays.

Complexity: This is not a complex issue to address, but substantial costs
may be involved if implemented through a retrofit.

Trusted groups of individuals working in teams perform repairs
so as to mitigate the risks of the repair process introducing corruptions,
leaks, or denial. Examples include cleared repair people and specially
contracted cleared repair facilities.

Inventory is tracked at a granularity sufficient to identify
the relevant history of individual parts. Examples include
physical inventory control systems, inventory audits, and on-line inventory
control over information.

Complexity: Inventory control is not a very complex process to do well, but it requires
a consistent level of effort and a systematic approach.
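The systematic approach described above can be sketched as an append-only history per part; the record layout and part identifiers here are illustrative assumptions, not part of the original text.

```python
# Minimal sketch of inventory tracking at part granularity (hypothetical
# record layout; a real system would persist records and control access).
from datetime import datetime, timezone

class Inventory:
    def __init__(self):
        self.history = {}  # part id -> list of (timestamp, event) records

    def record(self, part_id, event):
        # Every handling event is appended, never overwritten, so the
        # relevant history of each part can be reconstructed for an audit.
        self.history.setdefault(part_id, []).append(
            (datetime.now(timezone.utc).isoformat(), event))

    def audit(self, part_id):
        return self.history.get(part_id, [])

inv = Inventory()
inv.record("disk-0042", "received from vendor")
inv.record("disk-0042", "installed in server rack 3")
assert [e for _, e in inv.audit("disk-0042")] == [
    "received from vendor", "installed in server rack 3"]
```

The append-only structure is what makes the consistent, systematic effort mentioned above auditable after the fact.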

Physical or informational goods are delivered in such a way as
to assure security properties. Examples include cryptographic checksums
used to seal software distributed over public communications systems,
special bonded couriers using secure carrying cases, and buying through
phoney or friendly corporations so as to conceal the real destination of a
shipment.

Complexity: Some aspects of secure distribution, such as the elimination of
covert channels related to bandwidth utilization, can be quite complex - in
some cases as complex as cryptography in general, while other aspects are
relatively easy to do at low cost.
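The low-cost end of secure distribution, sealing content with a cryptographic checksum as described above, can be sketched with the standard library; the shared key here is a hypothetical stand-in for whatever out-of-band trust anchor a real distribution channel would use.

```python
# Sketch of sealing a software distribution with a keyed checksum
# (HMAC-SHA256); key and payload values are illustrative assumptions.
import hmac
import hashlib

def seal(payload: bytes, key: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, key: bytes, tag: str) -> bool:
    # compare_digest avoids leaking match position through timing.
    return hmac.compare_digest(seal(payload, key), tag)

key = b"distribution-channel-key"       # hypothetical shared key
software = b"example package contents"
tag = seal(software, key)

assert verify(software, key, tag)             # unmodified: accepted
assert not verify(software + b"!", key, tag)  # tampered: rejected
```

A public-key signature would serve the same sealing role without a shared secret, at the cost of the key management issues discussed later in this section.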

The management of keys in such a way as to retain the security
properties those keys are intended to assure. Examples include physical key
audits, anti-duplication measures, and periodic rekeying, public-key
automated key generation and distribution systems, and analysis of physical
traits on keys for integrity verification.

Complexity: Key management is one of the least understood and hardest
problem areas in cryptography today, and has been the cause of many
cryptosystem failures - perhaps the most widely publicized being the
inadequate key management by Germany during World War II that led to rapid
decoding of Enigma ciphers. Physical key management is equally daunting and
has led to many lock and key design schemes. To date, as far as can be
determined from the available literature, no foolproof key management scheme
has been devised.

Complexity: Lock technology has been evolving for thousands of years, and
the ideas and realizations vary from the simple to the sublime. In general,
locking mechanisms can be as complex as general informational and physical
properties.

Secure channels are specially devised communications channels
between parties that provide a high degree of assurance that communications
security properties are in place. Examples include trusted channels between
a user and an operating system required for the implementation of trusted
computing bases, cryptographically secured communication channels used in a
wide range of circumstances, and physically secured communications paths
controlled by gas-filled tubes which lose their communications properties if
altered.

Complexity: In general, trusted channels can be as complex as general
informational and physical properties.

Function is limited so as to prevent exploitation of Turing
capability or similar expansion of intended function. Examples include
limited function interfaces to systems, special purpose devices which
perform only specific functions dictated by their designers, and special
devices designed to fail under attempts to use them for other than their
intended purpose.

Complexity: It has been shown that systems with even a very limited set of
functions can be used to implement general purpose systems.
[Turing36]
For example, NOR gates can be used to create any digital
electronic function, and, subject to tolerance requirements, general purpose
physical machines also exist.
[Cohen94-4]
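The point about NOR gates can be illustrated directly: NOT, OR, and AND, and hence any digital function, built from NOR alone.

```python
# NOR is functionally complete: every other Boolean function can be
# composed from it, which is why "limited" function so easily becomes
# general-purpose capability.
def nor(a, b):
    return int(not (a or b))

def not_(a):
    return nor(a, a)

def or_(a, b):
    return not_(nor(a, b))

def and_(a, b):
    return nor(not_(a), not_(b))

assert [not_(x) for x in (0, 1)] == [1, 0]
assert [or_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
assert [and_(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 0, 1]
```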

Sharing of information is limited so as to prevent unfettered information flow.
Examples include the Bell-LaPadula security model, the Biba integrity model, Denning's Lattice
models, and Cohen's POset models.
[Bell73][Biba77][Denning75][Cohen87-2]

Complexity: Effectively limiting sharing with other than purely physical
means has proven to be a highly complex issue. For example, more than 20
years of effort has been put forth in the design of trusted systems to try
to achieve this goal and it appears that another 20 years will be required
before the goal is actually realized.
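The core rules of the Bell-LaPadula model cited above can be sketched in a few lines; the level names and their ordering are illustrative assumptions.

```python
# Minimal sketch of Bell-LaPadula confidentiality rules: a subject may
# read only at or below its own level ("no read up") and write only at
# or above it ("no write down"). Level names are hypothetical.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def may_read(subject_level, object_level):
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("secret", "public") and not may_read("public", "secret")
assert may_write("public", "secret") and not may_write("secret", "public")
```

The twenty-year difficulty noted above lies not in stating these rules but in enforcing them throughout a real system without covert channels.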

Limiting the ability of information to flow more than a certain
distance (perhaps in hops) from its original source. Examples include
special operating systems that track transitive information flow and limit
it and technologies that can only be shared a few times before losing their
operability.

Complexity: This is not a highly complex area, but there is a tendency for
information to rapidly reach its limit of transitivity and there are
restrictions on the granularity at which control can be done without
producing high mathematical complexity on time and space used to track
transitive flow.
[Cohen84][Cohen94-2]
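Hop-limited flow as described above can be sketched by tagging each derived copy with its distance from the source; the limit and record layout are illustrative assumptions.

```python
# Sketch of limiting transitive information flow by hop count: each
# derived copy carries a distance from the original, and derivation is
# refused at a fixed limit (MAX_HOPS is an illustrative parameter).
MAX_HOPS = 2

def derive(item):
    # Returns a derived copy one hop further from the source, or None
    # if the transitivity limit has been reached.
    if item["hops"] >= MAX_HOPS:
        return None
    return {"data": item["data"], "hops": item["hops"] + 1}

original = {"data": "report", "hops": 0}
first = derive(original)
second = derive(first)
assert second["hops"] == 2
assert derive(second) is None  # third-hop sharing is refused
```

The granularity issue noted above appears here as the cost of tagging: tracking hops per file is cheap, while tracking them per derived datum is not.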

Features known to be unsafe in a particular environment are
disabled. Examples include disabling network file systems operating over
open networks, disabling guest accounts, and disabling dial-in modems
connected to maintenance ports except when maintenance is required.

Complexity: This is not difficult to do, but often the feature that is
unsafe is used and this introduces a risk/benefit tradeoff. It is also
common to find new vulnerabilities and if this policy is followed, it may
result in numerous changes in how systems are used and thus create
operational problems that make the use of many features infeasible.

Information is authenticated as to source, content, and/or
other factors of import. Examples include authenticated electronic mail
systems based on public-key cryptography, source-authenticated secure
channels based on authenticating information related to individuals, and the
use of third party certification to authenticate information.

Complexity: The redundancy required for authentication and the complexity of
high-assurance authentication combine to limit the effectiveness of this
method; however, there is an increasing trend toward its use because it
is relatively efficient and reasonably easy to do with limited assurance.

The integrity of information, people, systems, and devices is
verified. Examples include detailed analysis of code, change detection with
cryptographic checksums, in-depth testing, syntax and consistency
verification, and verification of information through independent means.

Complexity: In general, the integrity checking problem can be quite complex;
however, there are many useful systems that are quite efficient and cost
effective. There is no limit to the extent to which integrity can be
checked, and the question of how certain we are based on which checks we
have done is open. As an apparently fundamental limitation, information used
to differentiate between two otherwise equivalent things can only be
verified by independent means.
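Change detection with cryptographic checksums, one of the efficient techniques mentioned above, can be sketched as a baseline of digests compared against current content; the item names and contents are illustrative.

```python
# Sketch of change detection: record a baseline SHA-256 digest for each
# protected item, then report anything whose current digest differs.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical baseline taken when the items were known good.
baseline = {
    "config": digest(b"port=443"),
    "binary": digest(b"\x7fELF..."),
}

def changed(name: str, current: bytes) -> bool:
    return baseline[name] != digest(current)

assert not changed("config", b"port=443")  # unmodified
assert changed("config", b"port=80")       # modification detected
```

Note the limitation stated above: the baseline itself must be verified by independent means, or an attacker who changes the item can change its recorded digest as well.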

Resources are allocated so as to assure that anything that is
started can be completed with the allocated resources. Examples include the
use of overspecification to assure operational compliance, preallocation of
all required memory for processing in computer operating systems, and static
allocation rather than dynamic allocation in telecommunications systems.

Complexity: In general, the allocation problem is at least NP-complete. As
a far more important limitation, most systems are designed to handle 90th to
99th percentile load conditions, but the cost of handling worst case load is
normally not justified. In such systems, stress-induced failures are almost
certain to be possible.
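Static preallocation as described above amounts to all-or-nothing admission control; the pool capacity and job sizes below are illustrative.

```python
# Sketch of static preallocation: work is admitted only if its entire
# resource requirement can be reserved up front, so anything started
# can be completed with what it was granted.
class Pool:
    def __init__(self, capacity):
        self.free = capacity

    def admit(self, required):
        # All-or-nothing reservation at start time; no partial grants.
        if required > self.free:
            return False
        self.free -= required
        return True

pool = Pool(capacity=100)
assert pool.admit(60)      # first job reserves 60 units
assert not pool.admit(50)  # second job refused: only 40 units remain
assert pool.admit(40)      # a job that fits is admitted
```

The refusal of the second job illustrates the tradeoff stated above: sizing for worst-case load means turning work away that a 99th-percentile design would have accepted, at risk of stress-induced failure.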

Concealment is used to provide services only to those who know how
to access them. Examples include secret hallways with trick doors, menu items
that don't appear on the menu, and programs that don't appear in listings but
operate when properly called.

Complexity: This type of concealment is often referred to as security through
obscurity because the effectiveness of the protection is based on a lack of
knowledge by attackers. This is generally held to be quite weak except in
special circumstances because it's hard to keep a secret about such things,
because such things are often found by clever attackers, and because
insiders who have the special knowledge are often the sources of attacks.

Traps are devices used to capture an intruder or retain their
interest while additional information about them is attained. Examples
include man-traps used to capture people trying to illicitly enter a
facility, computer traps used to entice attackers to remain while they are
traced, and special physical barriers designed to entrap particular sorts of
devices used to bypass controls.

Complexity: In general, no theory of traps has been devised; however, it is
often easy to set traps and easy to evade traps, and, in legal cases,
entrapment may void any attempt at prosecution.

Content is verified to assure that it is within normal
specifications or to detect particular content. Examples include
verification of parameter values on operating system calls, examination of
inbound shipments, and real-time detection of known attack patterns.

Complexity: In a limited function system, content checking is limited by the
ability to differentiate between correct and incorrect values within the
valid input range, while in systems with unlimited function, most of the key
things we commonly wish to verify about content are undecidable.
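Content checking in a limited-function setting, the decidable case noted above, can be sketched as validation against an explicit specification of permitted fields and ranges; the field names and ranges are illustrative assumptions.

```python
# Sketch of content checking on a limited interface: parameters are
# accepted only if every field is known and every value falls within
# its specified valid range (SPEC is a hypothetical specification).
SPEC = {"port": range(1, 65536), "retries": range(0, 10)}

def check(params: dict) -> bool:
    return (set(params) <= set(SPEC)
            and all(params[k] in SPEC[k] for k in params))

assert check({"port": 443, "retries": 3})
assert not check({"port": 70000})       # value out of range
assert not check({"shell": "/bin/sh"})  # unknown field rejected
```

Rejecting unknown fields by default is what keeps this case tractable; once arbitrary content must be judged safe or unsafe, the undecidability noted above applies.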

Trusted systems are systems that have been certified to meet
some set of standards with regard to their design, implementation, and
operation. Examples include systems and components approved by a
certification body under a published criteria and standard implementations
used for a particular purpose within a particular organization. While these
systems tend to have more effective protection than systems that are less
standard and not certified, the aura of legitimacy is sometimes unjustified.

Complexity: The certification process introduces substantial complexities
and delays, while the advantage of standardization comes from an economy of
scale in not having to independently verify the already certified properties
of a trusted system for each application.

Causing people to believe things that forward the goal.
Examples include the appearance of tight security which tends to reduce the
number of attacks, creating the perception that people who attack a
particular system will be caught in order to deter attack, and making people
believe that a particularly valuable target is not valuable in order to
reduce the number of attempts to attack it.

Complexity: It is always tricky trying to fool the opponent into doing
what you want them to do.

Typical deceptions include concealment, camouflage, false and
planted information, ruses, displays, demonstrations, feints, lies, and
insight.
[Dunnigan95]
Examples include facades used to misdirect attackers as to the
content of a system, false claims that a facility or system is watched by law
enforcement authorities, and Trojan horses planted in software that is downloaded
from a site.

Complexity: Deceptions are one of the most interesting areas of information
protection but little has been done on the specifics of the complexity of
carrying out deceptions. Some work has been done on detecting imperfect
deceptions.

Information on methods used for protection can be protected to
make successful and undetected attack more difficult. Examples include not
revealing specific weaknesses in specific systems, keeping information on
where items are purchased confidential, and not detailing internal
procedures to outsiders.

Complexity: Many refer to this practice as security through obscurity.
There is a tendency to use weaker protection techniques than are appropriate
under the assumption that nobody will be able to figure them out. History
shows this to be a poor assumption.

Auditors create independent systems that permit them to review
the content of a system under review with reduced dependency on the hardware
and software in the system under review. Examples include removing disks
from the system under review for review on a separate computer, booting
systems from auditor-provided start-up media in order to assure against
operating system subversion, and review of external information fed into and
generated by the system under review.

Complexity: It can be quite difficult to attain a high degree of assurance
against all hardware and software subversions, particularly in the cases of
storage media and deliberately corrupted hardware devices.

Special redundant devices are used to assure against systemic faults.
Examples include uninterruptible power supplies, hot and cold standby systems,
off-site recovery facilities, and n-modular redundancy.

Combinations of access are limited so as to limit the
combination of information available to an individual. Examples include the
Chinese walls used in the financial industries to prevent traders from
gaining access to both future planning information and share value information,
need-to-know separation in shared computing bases, and access controls
in some firewall environments.

Complexity: Chinese walls can become quite complex if they are being used to
enforce fine-grained access controls and if they are expected to deal with
time transitivity of information flow.
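The Chinese-wall restriction described above can be sketched as a conflict-class check in the spirit of the Brewer-Nash model: once a user has touched one firm in a class, its competitors are off limits. The classes and firm names are illustrative assumptions.

```python
# Sketch of a Chinese-wall access check: access to a firm is refused
# once the user has already accessed a competitor in the same conflict
# class (CONFLICT_CLASSES is a hypothetical configuration).
CONFLICT_CLASSES = [{"BankA", "BankB"}, {"OilX", "OilY"}]

accessed = {}  # user -> set of firms already accessed

def may_access(user, firm):
    seen = accessed.setdefault(user, set())
    for cls in CONFLICT_CLASSES:
        if firm in cls and (seen & cls) - {firm}:
            return False  # a competitor was already seen: wall applies
    seen.add(firm)
    return True

assert may_access("alice", "BankA")
assert not may_access("alice", "BankB")  # same conflict class
assert may_access("alice", "OilX")       # different class is fine
```

The time transitivity mentioned above is what this sketch omits: in a full treatment, information that flowed from BankA through an intermediary would also taint the intermediary's later accesses.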

Incident reports and mitigating actions are collected,
reported, correlated, and analyzed. Examples include analysis for patterns
of abuse, detection of changes in threat profiles, detection of low-rate
attacks, detection of increased overall attack levels, improvement of
response performance based on feedback from the analysis process, and the
collection and reuse of diagnostic and repair information.

Complexity: This is not a complex thing to do, but it is rarely done well.
In general, the analysis of information may be quite complex depending on
what is to be derived from it.

The number of copies of sensitive information is limited to
reduce the opportunity for its leakage. Examples include restrictions on
handling of sensitive information in most government and industry
facilities.

Complexity: There is a tradeoff between the advantage of retaining fewer
copies to prevent leakage and having enough copies to assure availability.

Each copy of sensitive information is numbered, cataloged, and
tracked to assure that no illicit copies are accidentally made and to add
identifying information to each copy to enable it to be traced to the
individual responsible for its handling.

Audit information is not kept in the control of the systems
about which the information applies. For example, audit information may be
immediately transmitted to a separate computer for storage and analysis, a
trusted computing base may separate the information from the areas that
generate it, or audit information may be written to a write-once output
device to assure against subsequent corruption or destruction.

Complexity: This is not complex to accomplish with a reasonable degree of
assurance; however, under high load conditions, such mechanisms often fail.

Facilities containing vital systems or information are kept in
low profile buildings and locations to avoid undue interest or attention.
Examples include computer centers located in remote areas, undistinctive and
unmarked buildings used for backup storage, and block houses used for storing
high valued information.

Complexity: This is not an inherently complex thing to do, but in many
companies, computer rooms are showcases and this makes low profiles hard to
maintain.

Traffic is limited to reduce the opportunity for diversions. Examples
include the separation of areas by partitions, separation of workers on different tasks
into different areas, and architecting floor and building areas so as to control flow.

Identifiable badges are worn by every authorized person in the
facility. Examples include electromagnetic badges that automatically open
locks to areas, smart-card badges that include electronic authentication
information, and the more common physical badges which use color, format,
seals, and similar authenticating information to assert authenticity.

Complexity: Badging technology can be quite complex depending on the control
requirements the badges are intended to fulfill.

Restrictions on who can get to what physical locations when.
Examples include gates requiring identification for entry, the use of locks
on wire closets, and secure areas for storage of material.

Complexity: Although the complexity of this area is not extreme in the
technical sense, there is a substantial body of knowledge on physical access
control.

Defense115: separation of equipment so as to limit damage from local events -

Physical distance and separation are used to prevent events
effecting one piece of information or technology from also effecting other
related ones. Examples include off-site storage facilities, physically
distributed processing capabilities, and hot sites for operation during
natural disasters and similar events. Physical separation is often mandated
by regulation, especially between classified and unclassified computers.

Complexity: Separation can be quite a complex issue because emanations can
travel over space and can be detected by a variety of means. Within a
secured area, for example, Sandia is required to maintain appropriate
separation between classified computing equipment and unclassified
equipment, phone lines, etc. The required separations are: 6 inches
between classified equipment and unclassified equipment, 6 inches between
classified equipment and unclassified cables, and 2 inches between
classified cables and unclassified cables. Cables include
monitor/keyboard/printer cables, phone lines, etc., but NOT power cables.
These separations are valid for classified systems up to and including SRD.

All material is examined when crossing some boundaries to
assure that corruption or leakage is not taking place. Examples include
examination of incoming hardware for Trojan horses, bomb detection, and
detection of improperly cleansed material being sent out.

Complexity: Because information can be encoded in an unlimited number of
ways, it can be arbitrarily complex to determine whether inbound or outbound
information is inappropriate. This is equivalent to the general computer
virus and Trojan horse detection problems.

Incomplete or obsolete data is suppressed to prevent its
accidental use or intentional abuse. Examples include databases that remove
old records, record keeping processes that expunge data after statutory and
business requirements no longer necessitate its availability, and the destruction
of old criminal records by government when appropriate.

Complexity: Tracking obsolescence is rarely done well; however, if tracking
is properly done, suppression is not complex.

Whenever sensitive information is sent or received, a procedure
is used to confirm the action with the other party. Examples include
receipts for information, non-repudiation services in information networks,
and advance notice of inbound transmissions.

Complexity: Non-repudiation is a fairly complex issue in information
systems, typically involving complex cryptographic protocols. In general, it
is as complex an issue as cryptography in general.

Each information asset is identified as to ownership and
stewardship and the owner and steward are held accountable for those
assets. Examples include document tracking, owner-based access control
decisions, and mandatory actions taken in cases where individuals fail
to carry out their responsibility.

Complexity: This is not complex to do, but may require substantial overhead.

Responsibility for protection is clear throughout the
organization. Examples include executive-level responsibility for overall
information asset protection, delineated responsibility at every level of
the organization, and spelled out responsibilities for individual
accountability.

Complexity: The challenges of organizational responsibility are substantial
because they involve getting people to understand information protection at
a level appropriate to their job functions.

Changes in programs are accounted for in detail. Examples
include automated change tracking systems, version control systems, and
full-blown change control systems.

Complexity: Change control is often too expensive to be justified and
is commonly underestimated. Sound change control is hard to attain and
approximately doubles the software costs of a system.
[Cohen94-2]

Names and other identifying information are kept confidential.
Examples include keeping the employee phone book and organizational chart
from being released to those who might abuse its content, keeping filenames
and machine names containing confidential data confidential to make them
harder to find, and keeping the names of clients, projects, and other
operational information confidential to prevent exploitation.

Complexity: Operations security is a complex field - in large part because
of the effects of data aggregation.

All applicable laws and regulations are followed. Examples include
software copyright laws, anti-trust laws, and regulations applying to the separation
of information between traders and investment advisors.

Binding legal agreements are used to provide a means of
recovering from losses due to negligence or non-fulfillment. Examples
include insurance contracts relating to information assets, employee
contracts that specify the employee's responsibilities and punishments for
non-compliance, and vendor and outsourcing agreements specifying
requirements on supplied resources and responsibility for failures in
compliance.

Complexity: The complexity of the legal system is legend.

Defense125: time, location, function, and other similar access limitations -

Access is restricted based on time, location, function, and
other characterizable parameters. Examples include time locks on bank
vaults, access controls based on IP address, terminal port, or combinations
of these, and barriers which require some time in order to be bypassed.

Complexity: The only real complexity issue is in identifying the proper sets
of controls over the conditions for access and non-access for each
individual relating to each system.
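Access limits combining time and location as described above can be sketched with the standard library; the work-hours window and address range are illustrative parameters.

```python
# Sketch of access control based on time of day and source address:
# access is granted only during the configured window and only from the
# configured network (both parameters are hypothetical).
from ipaddress import ip_address, ip_network

ALLOWED_NET = ip_network("10.0.0.0/8")
WORK_HOURS = range(8, 18)  # 08:00 through 17:59

def may_access(hour: int, source: str) -> bool:
    return hour in WORK_HOURS and ip_address(source) in ALLOWED_NET

assert may_access(9, "10.1.2.3")
assert not may_access(23, "10.1.2.3")  # outside the time window
assert not may_access(9, "192.0.2.7")  # outside the allowed network
```

Conjoining conditions this way is the easy part; the complexity identified above is in choosing the right condition sets for each individual and system.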

Measures, practices, and procedures for the security of information
systems should address all relevant considerations and viewpoints, including
technical, administrative, organizational, operational, commercial,
educational, and legal.
[GASSP95]

Complexity: Security is achieved by the combined efforts of data owners, custodians, and security personnel. Essential
properties of security cannot be built-in and preserved without other disciplines such as configuration
management and quality assurance. Decisions made with due consideration of all relevant viewpoints will be
better decisions and receive better acceptance. If all perspectives are represented when employing the least
privilege concept, the potential for accidental exclusion of a needed capability will be reduced. This principle
also acknowledges that information systems are used for different purposes. Consequently, the principles will
be interpreted over a wide range of potential implementations. Groups will have differing perspectives,
differing requirements, and differing resources to be consulted and combined to produce an optimal level of
security for their information systems.

Measures, practices, and procedures for the security of
information systems should be coordinated and integrated with each other and
with other measures, practices, and procedures of the organization so as to
create a coherent system of security.
[GASSP95]

Complexity: The most effective safeguards are not recommended individually,
but rather are considered as a component of an integrated system of
controls. Using these strategies, an information security professional may
prescribe preferred and alternative responses to each threat based on the
protection needed or budget available. This model also allows the developer
to attempt to place controls at the last point before the loss becomes
unacceptable. Since developers will never have true closure on specification
or testing, this model prompts the information security professional to
provide layers of related safeguards for significant threats. Thus if one
control is compromised, other controls provide a safety net to limit or
prevent the loss. To be effective, controls should be applied universally.
For example, if only visitors are required to wear badges, then a visitor
could look like an employee simply by removing the badge.

Public and private parties, at both national and international
levels, should act in a timely coordinated manner to prevent and to respond
to breaches of the security of information systems.
[GASSP95]

Complexity: Due to the interconnected and transborder nature of information systems and the potential for damage to
systems to occur rapidly, organizations may need to act together swiftly to meet challenges to the security of
information systems. In addition, international and many national bodies require organizations to respond in a
timely manner to requests by individuals for corrections of privacy data. This principle recognizes the need for
the public and private sectors to establish mechanisms and procedures for rapid and effective incident
reporting, handling, and response.
This principle also recognizes the need for information security principles to use current, certifiable threat and
vulnerability information when making risk decisions, and current certifiable safeguard implementation and
availability information when making risk reduction decisions.
For example, an information system may also have a requirement for rapid and effective incident reporting,
handling, and response. In an information system, this may take the form of time limits for reset and recovery
after a failure or disaster. Each component of a continuity plan, continuity of operations plans, and disaster
recovery plan should have timeliness as a criterion. These criteria should include provisions for the impact the
event (e.g., disaster) may have on resource availability and the ability to respond in a timely manner.

The security of an information system should be weighed against the rights of
users and other individuals affected by the system.
[GASSP95]

Complexity: It is important that the security of information systems is compatible with the legitimate use and flow of data
and information in the context of the host society. It is appropriate that the nature and amount of data that can
be collected is balanced by the nature and amount of data that should be collected. It is also important that the
accuracy of collected data is assured in accordance with the amount of damage that may occur due to its
corruption. For example, individuals' privacy should be protected against the power of computer matching.
Public and private information should be explicitly identified. Organization policy on monitoring information
systems should be documented to limit organizational liability, to reduce potential for abuse, and to permit
prosecution when abuse is detected. The monitoring of information and individuals should be performed within
a system of internal controls to prevent abuse.
Note: The authority for the following candidate principles has not been established by committee consensus,
nor are they derived from the OECD principles. These principles are submitted for consideration as additional
pervasive principles.

Information security forms the core of an organization's information internal
control system.
[GASSP95]

Complexity: This principle originated in the financial arena but has
universal applicability. As an internal control system, information security
organizations and safeguards should meet the standards applied to other
internal control systems. "The internal control standards define the minimum
level of quality acceptable for internal control systems in operation and
constitute the criteria against which systems are to be evaluated. These
internal control standards apply to all operations and administrative
functions but are not intended to limit or interfere with duly granted
authority related to development of legislation, rulemaking, or other
discretionary policymaking in an organization or agency."

Controls, security strategies, architectures, policies, standards, procedures, and
guidelines should be developed and implemented in anticipation of attack from
intelligent, rational, and irrational adversaries with harmful intent or harm
from negligent or accidental actions.
[GASSP95]

Complexity: Natural hazards may strike all susceptible assets. Adversaries will threaten systems according to their own
objectives. Information security professionals, by anticipating the objectives of potential adversaries and
defending against those objectives, will be more successful in preserving the integrity of information. It is also
the basis for the practice of assuming that any system or interface that is not controlled is assumed to have
been compromised.

Information security professionals should identify their organization's needs
for continuity of operations and should prepare the organization and its
information systems accordingly.
[GASSP95]

Complexity: Organizations' needs for continuity may reflect legal, regulatory, or financial obligations of the organization,
organizational goodwill, or obligations to customers, board of directors, and owners. Understanding the
organization's continuity requirements will guide information security professionals in developing the
information security response to business interruption or disaster. The objectives(4) of this principle are to
ensure the continued operation of the organization, to minimize recovery time in response to business
interruption or disaster, and to fulfill relevant requirements.
The continuity principle may be applied in three basic concepts: organizational recovery, continuity of
operations, and end user contingent operations. Organizational recovery is invoked whenever a primary
operation site is no longer capable of sustaining operations. Continuity of operations is invoked when
operations can continue at the primary site but must respond to less than desirable circumstances (such as
resource limitations, environmental hazards, or hardware or software failures). End user contingent
operations are invoked in both organizational recovery and continuity of operations.

Information security professionals should favor small and simple safeguards
over large and complex safeguards.
[GASSP95]

Complexity: Simple safeguards can be thoroughly understood and tested. Vulnerabilities can be more easily detected.
Small, simple safeguards are easier to protect than large, complex ones. It is easier to gain user acceptance of
a small, simple safeguard than a large, complex safeguard.

Processing at different levels of security is done at different
periods of time with a color change operation used to remove any
information used during one period before the next period of processing
begins.

Complexity: It is rather difficult to thoroughly and with certainty
eliminate all cross talk between processing on the same hardware at different
periods of time.

Alarms are used to indicate detected intrusions. Examples
include automated software-based intrusion detection systems, alarm systems
based on physical sensors reporting to central control rooms, and systems in
which each sensor alarms on detection.

Complexity: The technology for detection is quite broad and there are many complexity issues that have
not been thoroughly researched.
[Cohen96-4]

Location is often related to risks. Examples include selection
of neighborhood and its impact on crime, selection of geographical location
and its relation to natural disasters, and selection of countries and its
relation to political unrest.

Complexity: Statistical data is available on physical locations and can
often be used to analyze risks with probabilistic risk assessment
techniques.

Devices used to limit the pass through of information or
physical items. Examples include air filters to limit smoke damage and
noxious fumes, packet filters to limit unauthorized and malicious
information packets, and noise filters to reduce the impact of external
noise.

Complexity: Filtering can be quite complex depending on what has to be
filtered.
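A packet filter of the kind mentioned above can be sketched as a first-match rule list; the rules, fields, and default policy are illustrative assumptions.

```python
# Sketch of a packet filter: each rule matches on protocol and
# destination port, the first matching rule decides, and unmatched
# traffic falls through to a default-deny policy (rules hypothetical).
RULES = [
    {"proto": "tcp", "port": 22, "action": "deny"},
    {"proto": "tcp", "port": 80, "action": "allow"},
]
DEFAULT = "deny"

def filter_packet(proto: str, port: int) -> str:
    for rule in RULES:
        if rule["proto"] == proto and rule["port"] == port:
            return rule["action"]
    return DEFAULT

assert filter_packet("tcp", 80) == "allow"
assert filter_packet("tcp", 22) == "deny"
assert filter_packet("udp", 53) == "deny"  # falls through to default
```

The complexity the text warns about grows with what must be matched: header fields are simple, while filtering on content runs into the undecidability issues raised earlier for content checking.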