In this paper, we describe much of the previous and
present work on computer viruses. We begin with a short history and
bibliographic summary. We then describe some of the major issues that
arise in the study of computer viruses and their protection
ramifications. We describe most of the lines of research presently
under way and some of their features and failings. We introduce a
method by which certain classes of systems may be used in such a manner
as to provide limited protection from computer viruses, and by which
general purpose experiments in new protection mechanisms may be
explored. Finally, we point out some of the social issues implied by
viruses and the ramifications of our present social policies on the
integrity of information residing in information systems.

1 - Introduction

A "computer virus" [1] is a program that can "infect"
other programs by modifying them to include a possibly evolved version
of itself. With this infection property, a virus can spread to the
transitive closure of information flow, corrupting the integrity of
information as it spreads [1]. Given the widespread use of sharing in
current computer systems, the threat of a virus causing widespread
integrity corruption is significant [2,3].

A considerable amount of early work was done on
protection policies to prevent the illicit dissemination of
information [4,5], and several systems have been implemented to provide
protection from this sort of attack [6-9], but little work was done on
protecting the integrity of information except in very limited domains
[10,11]. With the advent of computer viruses [1], this situation has
changed dramatically, with over 20 journal and conference papers
appearing in the last 3 years [12-32], and several conferences on
viruses held every year.

It is critical to understand that viruses are not
programs that exploit errors or omissions in the implementation of
operating systems. They are, in every sense of the word, normal user
programs, using only the normal sorts of operations that every user of
computers uses every day. It has been proven that in any system that
allows information to be shared, interpreted, and retransmitted once
received, a virus can spread throughout the system [13]. The only hope
for perfect prevention lies in limiting these functions, but these
functions are exactly the reason that computers are so very useful in
the world today. We cannot reasonably expect that these functions will
be abandoned on any broad scale, and thus we must resign ourselves to
the fact that the situation is and always will be one of survival of the
fittest.

Earlier work on related topics included the Xerox worm
program [33] which demonstrated the ability to propagate through a
network. The worm accidentally caused denial of services in an isolated
instance, apparently due to a bit error. The game of "core wars" [34]
was invented to allow programs to "do battle" with each other. Other
variations on this theme have been reported by many unpublished authors.
The term virus has also been used in conjunction with an augmentation to
APL in which the author places a call at the beginning of each user
defined APL function to invoke a preprocessor which augments the default
APL interpreter [35]. The major difference between earlier efforts and
our research is that a computer virus performs "infection". With this
infection property, many major issues arise, most notably:

Viruses spread transitively, and thus have a
tremendous range relative to other attacks. A typical PC based virus
spreads to hundreds of unconnected computers in a matter of weeks. In
better connected networks, transitive information flow has reached
thousands of computers in a matter of hours.

Viruses persist indefinitely, and are stored on backup
tapes and other media. They can be accidentally or purposely revived
many years after they appear to have been eradicated. Attacks with long
latency before damage or very subtle damage tend to spread further.
Backups provide safe harbor for viruses rather than sound defenses
against them.

Viruses tend to spread very rapidly. As in biological
viruses, the more the infection spreads, the more copies there are
available for spreading it. A typical time to attain all privileges of
all users on a timesharing system is 30 minutes. A typical network
attack takes only hours to spread throughout the network. In one
example, an IBM PC-network was completely taken over in 30 seconds.

Viruses can act as carriers for any other information,
and thus can be used to cause arbitrary effects. They can carry other
attacks along with them, and thus bypass many of the protection
mechanisms that would otherwise be in place against these other attacks.
Thus a virus can be used to introduce a covert channel, modify
operational controls, or do any other damage.

Since the level of indirection between initial
infection and any given instance of a virus is so great, it may be very
difficult to trace an infection to its initial source. Thus the chance
of getting caught with a viral attack is substantially less than for
most other attacks.

Most of the problems related to detection and
eradication of infection mechanisms are undecidable. Evolving viruses
are easily written and are often harder to detect and eradicate than
simple viruses.

Infection can be performed at any level at which
information can be interpreted, and can infect other levels. Protecting
executable programs alone will not prevent viruses, nor will protecting
source programs, or intermediate files. The integrity of information
must be protected at every level, and no comprehensive integrity theory
yet exists.

As an analogy to a computer virus, consider a
biological disease that is 100% infectious, spreads whenever animals
communicate, kills all infected animals instantly at a given moment, and
has no detectable side effect until that moment. If a delay of even one
week were used between the introduction of the disease and its effect,
it would be very likely to leave only a few people in remote villages
alive, and would certainly wipe out the vast majority of modern society.
If a computer virus of this type spread throughout the computers of the
world, it would likely stop most computer usage for a significant period
of time, and wreak havoc on modern government, financial, business, and
academic institutions.

The potential threat of a widespread protection
problem has been examined [36], and the potential damage to government,
financial, business, and academic institutions is extreme. In addition,
these institutions tend to use ad-hoc protection mechanisms in response
to specific threats rather than sound theoretical techniques [37].
Older military protection systems depend to a large degree on
isolationism, but newer systems allow 'multilevel' usage, in which users
of various levels of trustworthiness simultaneously coexist and
communicate within a single system [38]. In most of these systems, the
user with the lowest security clearance is the greatest threat from the
standpoint of viral attack, because higher security level users can run
programs written at the lowest security level, and thus get infected.

2 - Experimental and Real World Attacks

Several experimental and real world viral attacks have
been carried out. We report only those which have been substantiated by
multiple trustworthy sources or which have been experimentally verified
in the course of our research.

Viral Experiments:

To demonstrate the feasibility of viral attack and the
degree to which it is a threat, we performed several experiments. In
each case, experiments were performed with the knowledge and consent of
systems administrators. Implementation flaws were meticulously avoided
because it was critical that these experiments not be based on
implementation lapses, but rather on the fundamental nature of computer
use.

Experiments have been performed on a variety of
systems to demonstrate feasibility and determine the ease of
implementing a virus on many systems. The following table summarizes
the results of some representative experiments to date. The systems and
languages are across the horizontal axis (Unix in C, Bell-LaPadula,
Instrumentation, Unix shell, VMS command language, Basic, IBM PCnet, and
PC under the DOS command language). The vertical axis indicates
measures of performance (time to write the program, infection time,
number of lines of code, number of experiments performed, minimum time
to takeover, average time to takeover, and maximum time to takeover)
where time to takeover indicates that all rights would be granted to the
attacker within that delay from introducing the virus. N/A entries in
the table indicate cases where results are either not applicable or not
available. Most of these experiments have been confirmed by many
researchers world wide.

Many attacks have also been reported in the "real
world". Many attacks are well documented, but the vast majority of
those found to date are very simplistic and perform damage in a very
obvious and rapid fashion. Perhaps the two most damaging viruses that
have been detected to date involved a delay between infection and
damage.

One of those was apparently started at Lehigh
University in the fall of 1987. In this PC based virus, an infected
operating system disk causes itself to be replicated into the operating
system of other disks it comes in contact with. After 4 replications,
it destroys the file system. This virus infected several hundred
computers and caused hundreds of systems to lose all of the data on
their disks before it was detected. Several hundred more were infected
before a defense was designed. Consider now the ramifications of a
trivial change to this virus which waits until 40 infections have
occurred before performing damage. Tens of thousands of computers
worldwide would almost certainly have been damaged. We are not certain,
nor can we ever be certain,
that this virus isn't being stored on a backup disk somewhere by an
uninformed user. It can of course reappear again and again.

Another very damaging virus was launched against the
Amiga computer. In this case, the virus managed to get into the
legitimate distribution system of Commodore, and was spread in programs
sold legitimately by the company. Many businesses have had their
records and programs destroyed by this virus, which they paid for along
with a legitimate piece of software. Commodore apparently gets one or
more reports per week of this virus, and apparently has no defense
against it at this time.

On the very widely used "Compuserve" network, a virus
was apparently planted to infect the initialization files of the Apple
Macintosh. This virus was designed to put an advertisement on the
screen on a particular date and then delete itself. It was noticed by a
programmer browsing through their system initialization files, and the
perpetrator was kicked off of Compuserve "forever". Compuserve has
countered by providing a public domain program that constantly runs in
the background checking for modifications to system init files, and asks
the user if these are desired. This defense takes computer time, disk
space, and in the end doesn't really prevent viruses from corrupting
other files. It also introduces substantial overhead for the user who
has to answer technical questions about operational procedures in order
to be able to use the system, and may interfere with other programs that
are already operational on the system.

Similar attacks have been reported at the University
of Maryland, the University of Pittsburgh, the Hebrew University, and
several other universities. In light of the history of nondisclosure
from the private sector, we can expect that these university attacks
are indicative of the situation in the industrial community as well.

We don't want to leave the impression that only the
weak and unprotected users of personal computers have been attacked in
this fashion. In fact, there is no question that several large computer
companies have been successfully attacked, and that viruses have been
spread throughout their timesharing systems, even where the most
stringent protection is provided. The experimental demonstrations above
showed that viruses spread throughout systems with Bell-LaPadula based
protection where the user with the least privilege is the most dangerous
from the standpoint of viral attack. RACF, ACF2, MULTICS, and most of
the large list of other mainframe based protection systems are all
extremely vulnerable.

In Germany, the employees of a company were about to
go on strike, and one of them decided to leave a virus in the mainframe
computer. As the strike pressed on, the computer slowed to a halt.
Backups were used to restore the system, and it again ground to a halt.
In the end, several weeks worth of work had to be abandoned in order to
get back to a state where the system operated. Again, a virus of this
sort could be made to delay for months or years, and cause catastrophic
losses which backup tapes would not be able to prevent.

The Christmas chain letter that raced through
thousands of IBM mainframes in their worldwide network several months
ago was apparently not a virus. It replicated, but appears not to have
infected any other programs along its way. Clearly it could have
performed infection, and had it done so, it would have created a
worldwide disaster for the world's largest computer company. In fact,
these systems do not have any protection from the same sort of attack
happening again, nor do they prevent the use of infection in such an
attack.

Several of the major computer companies have reported
successful attacks to the media, and according to less reliable sources,
many of the computer systems in Silicon Valley have been infected by
pranksters and more serious attackers. In one case, some experimenters
created an evolutionary virus which escaped into both Hewlett-Packard's
and Apple Computer's internal systems. Experiments have demonstrated
viruses on DEC's and Unisys's computers.

What we cannot fail to see is that every major US
computer manufacturer has sustained attacks, and that most of them have
publicly stated that these attacks have succeeded and that there is no
known defense at this time.

3 - Current Best Lines of Defense

Prevention:

In general, we can limit information flow so as to
form a POset of communicating information domains in a network or system
[18,19,21,22]. In such a system, we can guarantee that a virus will
spread only to those domains which are in the transitive flow path from
its initial source [18]. The POset policy has advantages over the
Bell-LaPadula [4] policy, the Biba [11] policy, and the lattice policy
[5], in that it is more general than these, and encompasses properties
of information networks as well as single processors. We have
demonstrated methods by which collusions of multiple attackers can be
limited in their damage [18], administrative methods by which such a
network can be properly managed [18], methods for implementing
distributed administration and management of such systems [19], a
physical device for implementing these systems with provable
implementation of policies [22], and have implemented an actual system
which has these properties [21].
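
As a minimal sketch of this flow limitation (the domains and adjacency
encoding below are our own illustration, not the implementation of [21]
or [22]), checking whether a virus planted in one domain can reach
another amounts to computing reachability under the flow relation:

#include <stdio.h>

#define NDOMAINS 4

/* flows[i][j] is 1 if information may flow directly from domain i
 * to domain j. */
static int flows[NDOMAINS][NDOMAINS] = {
    {0, 1, 1, 0},   /* domain 0 may flow to domains 1 and 2 */
    {0, 0, 0, 1},   /* domain 1 may flow to domain 3 */
    {0, 0, 0, 1},   /* domain 2 may flow to domain 3 */
    {0, 0, 0, 0},   /* domain 3 is a sink */
};

/* A virus planted in 'from' can spread exactly to the transitive
 * closure of 'from' under the flow relation [18]. */
static int reaches(int from, int to, int seen[NDOMAINS])
{
    int next;

    if (from == to)
        return 1;
    seen[from] = 1;
    for (next = 0; next < NDOMAINS; next++)
        if (flows[from][next] && !seen[next] && reaches(next, to, seen))
            return 1;
    return 0;
}

int main(void)
{
    int seen[NDOMAINS] = {0};

    printf("0 -> 3: %s\n", reaches(0, 3, seen) ? "may flow" : "blocked");
    return 0;
}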

In a system with unlimited information paths, limited
transitivity may have an effect if users don't use all available paths,
but since there is always a direct path between any two users, there is
always the possibility of infection. As an example, in a system with
transitivity limited to a distance of 1, it is 'safe' to share
information with any user you 'trust' without having to worry about
whether that user has incorrectly trusted another user [1]. We have
generalized this principle to arbitrary subsets of users and to
arbitrary sequences of user actions [13]. In general, this problem
becomes as complex as precise maintenance of information flow which has
been proven NP-complete [5].
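
A minimal sketch of the distance-1 rule, under an encoding of our own
devising, tags each object with the number of sharing steps it has
traversed and refuses transfers past the limit:

/* Limited transitivity sketch (assumed encoding, after [1,13]): each
 * object records how many sharing steps it has traversed. */
struct object {
    int flow_distance;          /* sharing steps since creation */
};

#define MAX_FLOW_DISTANCE 1     /* the distance-1 policy from the text */

/* Returns 1 if the object may still be shared with another user. */
int may_share(const struct object *obj)
{
    return obj->flow_distance < MAX_FLOW_DISTANCE;
}

/* Called whenever a share is actually performed. */
void record_share(struct object *obj)
{
    obj->flow_distance++;
}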

Although isolationism and limited transitivity offer
solutions to the infection problem, they are not ideal in the sense that
widespread sharing is generally considered a valuable tool in computing.
Of these policies, only isolationism can be precisely implemented in
practice because tracing exact information flow requires solving an
NP-complete problem, and maintaining records of the flow requires large
amounts of
space [5]. This leaves us with imprecise techniques. The problem with
imprecise techniques is that they tend to move systems towards
isolationism because they use conservative estimates in order to prevent
potential damage. When information has been unjustly deemed unreadable
by a given user, the system becomes less usable for that user. This is
a form of denial of services in that access to information that should
be accessible is denied. Such a system always tends to make itself less
and less usable for sharing until it either becomes completely
isolationist or reaches a stability point where all estimates are
precise. If such a stability point existed, we would have a precise
system for that stability point. Since we know that any precise
stability point besides isolationism requires the solution to an
NP-complete problem, we know that any non NP-complete solution must tend
towards isolationism [1,13].

Detection of Viruses:

We have shown that detection of a virus is undecidable
[1,13], as is the detection of evolutions of viruses from known viruses.
A very interesting result of the undecidability issue is that, in an
environment where general purpose evolutionary mechanisms are used for
the generation of both viruses and viral detection and correction
systems, a situation analogous to the biological situation of survival
of the fittest could result.

Biological Models:

A biological analogy to computer viruses seems most
appropriate in this case. The concept of following the biological lines
of prevention, detection, and cure has been pursued to a small degree
[13,30], but no significant progress has yet been made towards a
systematic defense under this model.

Cure of Viruses:

Cure of viruses has been examined in some depth, and
appears to present little hope in the general sense. The undecidability
issues make the detection of a virus very hard. In addition, the tail
chasing problem indicates that the cure of viruses can only be performed
while services are denied [1]. This is similar to illness in biological
systems. Finally, viruses stored on backups are rarely eradicated when
a cure is applied, so reinfection after a cure is quite likely. In
effect, the backup provides a medium for the hibernation of the virus.

Vaccination:

As an alternative to system wide defensive techniques
against viruses, the survival of the fittest analogy provides an
interesting means for self defense by individual programs. Rather than
have every programmer design an independent self defense mechanism, we
can imagine that a compiler might generate defenses automatically for
programs compiled with it. A typical self defense involves detection
and correction of internal errors [40]. The problems with this sort of
defense are many and varied, and in general it does not prevent viral
attack. The advantage of such a system is that it is easily
implementable in a very short period of time, and that it makes the
complexity of writing an undetected and uncorrected virus very high
[23,27].

A recent result shows that we can systematically cause
the complexity of undetected attack against programs and their data
files to be made very large by the use of an evolutionary cryptographic
checksumming technique [13,23,27]. We have prototyped a technique for
providing protection in systems with no built-in protection mechanisms,
and have successfully demonstrated its use in detecting viral attack.
Source code for a software self defense program is included in the
appendices of this paper.
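
The appendix contains the actual self defense code; purely as a sketch
of the general idea (the checksum below is an ordinary keyed sum, not
the evolutionary cryptographic technique of [13,23,27], and the key
shown is hypothetical), a program can be compared against a separately
stored sum before it is trusted:

#include <stdio.h>
#include <stdlib.h>

/* Keyed checksum over a file; without the key, an attacker cannot
 * trivially forge a matching sum. A real defense would use a
 * cryptographic function, per [13,23,27]. */
static unsigned long keyed_sum(const char *path, unsigned long key)
{
    FILE *f = fopen(path, "rb");
    unsigned long sum = key;
    int c;

    if (f == NULL)
        return 0;
    while ((c = getc(f)) != EOF)
        sum = sum * 31 + (unsigned long)c + key;
    fclose(f);
    return sum;
}

int main(int argc, char **argv)
{
    unsigned long key = 0x5DEECE66UL;   /* hypothetical secret key */
    unsigned long sum;

    if (argc < 2) {
        fprintf(stderr, "usage: selfcheck file [expected-sum]\n");
        return 2;
    }
    sum = keyed_sum(argv[1], key);
    if (argc > 2 && sum != strtoul(argv[2], NULL, 0)) {
        fprintf(stderr, "%s: image modified - possible infection\n",
                argv[1]);
        return 1;
    }
    printf("%lu\n", sum);
    return 0;
}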

Proper Defaults:

Default protection mechanisms on most systems are
poorly maintained by systems administrators. By simply setting the
default protection on files so that other users cannot execute them,
viruses are prevented from spreading between users except through
programs that are specifically enabled for shared use. One important
reason this is helpful is that it limits the ways in which viruses can
spread, and makes the tracking of their progress and potential
determination of their source much easier.

The use of user defined access controls can also aid
this protection to a significant degree. Rather than limiting the
allowable access to a large group of people, individual permission for
other users to use programs reduces the potential spread. Although this
does not prevent viral attack, it makes implementation of viruses less
trivial.
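
On Unix, for example, both halves of this policy can be expressed in a
few lines; the sketch below (our own illustration) removes all default
rights for other users and then grants execute permission only to
programs deliberately enabled for shared use:

#include <sys/types.h>
#include <sys/stat.h>

/* Called early in a user's session (e.g., from the login shell's
 * startup): newly created files carry no rights for group or other,
 * so no other user can execute - or even read - them by default.
 * Equivalent to the shell command "umask 077". */
void set_safe_defaults(void)
{
    umask(S_IRWXG | S_IRWXO);
}

/* Sharing is then an explicit act: grant execute (but not read or
 * write) permission, and only for programs specifically intended
 * for shared use. */
int enable_shared_use(const char *path)
{
    return chmod(path, S_IRWXU | S_IXGRP | S_IXOTH);
}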

User Notification and Awareness:

User awareness is a critical aspect of protection. If
users are aware of the threats against them, they may be better prepared
to protect themselves. As an example, every time a file is modified,
the operating system could notify the user of the change. This would
make the user more aware of strange side effects of programs, and would
allow the detection of many attacks by the user being attacked.

Perhaps the simplest modification to the operating
system to this end is to output a message to the user's terminal every
time a file is opened. This does not take a great deal of time or
effort to implement and doesn't hinder typical operation, but it does
provide the user with the ability to detect programs violating their
perceived operating space. Requiring approval for writing to certain
classes of files augments the notification process while decreasing
convenience and increasing user awareness. The confinement of programs
to limited authorization spaces has been shown infeasible because of
covert channels [10], and precise tracking, as we pointed out earlier,
is NP-complete [5].
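
As an illustrative sketch only (a real implementation belongs inside
the operating system, where no program can bypass it; the wrapper name
here is our own), such notification might look like:

#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>

/* Programs call notified_open() in place of open(), and every file
 * access is echoed to the user's terminal, making strange side
 * effects of programs visible. */
int notified_open(const char *path, int flags, mode_t mode)
{
    FILE *tty = fopen("/dev/tty", "w");

    if (tty != NULL) {
        fprintf(tty, "opening %s%s\n", path,
                (flags & (O_WRONLY | O_RDWR)) ? " (for writing)" : "");
        fclose(tty);
    }
    return open(path, flags, mode);
}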

Instrumentation:

Two types of system instrumentation were considered
for detecting and tracing viral attacks; although no technique will
catch all viruses, many techniques may catch most viruses. The first is
the 'flow list' method, whereby every user who might have had a direct
or indirect effect on a file is kept in an 'incoming flow list' for
that file. The second is the statistical behavior method, whereby the
behavior of normal operation is observed and compared to the behavior
under viral attack in order to find good measures with which to detect
viruses. Each was shown to be an improvement over systems with no
detection mechanisms, but both are far from ideal. Flow lists tend to
become large rapidly, and are NP-complete to maintain precisely.
Statistical techniques tend to flag non-viruses, and if the nature of
the technique is known, can always be avoided [1,13].
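
A sketch of the flow list bookkeeping, under data structures assumed
for illustration, shows both the mechanism and why the lists grow so
rapidly:

#define MAX_FLOW 64     /* real flow lists grow rapidly; see text */

/* Incoming flow list: every user who might have affected this file,
 * directly or indirectly. */
struct flow_list {
    int uids[MAX_FLOW];
    int count;
};

static int flow_contains(const struct flow_list *fl, int uid)
{
    int i;

    for (i = 0; i < fl->count; i++)
        if (fl->uids[i] == uid)
            return 1;
    return 0;
}

static void flow_add(struct flow_list *fl, int uid)
{
    if (!flow_contains(fl, uid) && fl->count < MAX_FLOW)
        fl->uids[fl->count++] = uid;
}

/* On write: the writer, plus every user who ever flowed into the
 * writer, may now have affected the file. */
void flow_merge(struct flow_list *file, const struct flow_list *writer,
                int writer_uid)
{
    int i;

    flow_add(file, writer_uid);
    for (i = 0; i < writer->count; i++)
        flow_add(file, writer->uids[i]);
}

Every write unions the writer's own incoming flows into the file's
list, so the lists tend monotonically towards including every user.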

Programs that examine other programs' structure and
behavior have been under investigation for quite some time for detecting
cheating in computer science classes, and extensions of these principles
are being explored for viral detection. These techniques appear to be
very limited in their application, and offer little assurance. Even
more importantly, such techniques may lead to a false sense of security.

Software Fault Tolerance:

The problem of computer viruses can be thought of as a
reliability problem in which we have 'N' system users, 'M' of which are
reliable. The problem we are faced with is that of getting reliable
answers from a system with only 'M of N' reliable components. We have
performed mathematical analysis of the effects of collusions on the
spread of integrity corruption [17,18,19,21], and are presently
considering issues related to redundant POset networks. Research in
N-version programming has shown that it is very difficult to get
multiple versions of the same program to operate with the same
behavioral characteristics [41], and that even two correctly written
programs may give different results if their specification is not
sufficiently complete [42].

When we run a multiversion program, we specify N
executables, all written by different authors. The OS automatically
invokes the N programs in parallel, and performs voting on all inputs
and outputs [43-45]. The user is notified of disagreements, and the
answer that is held by the majority of the programs (assuming there is
one) is considered correct. This result is also fed into the other
programs so that they can proceed on the basis of this result for
further processing. If there is no majority vote, the program is
aborted as having too severe an error for continuation.
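
Reduced to a single integer result per version for illustration (real
systems vote on all inputs and outputs, as described above), the vote
itself is straightforward:

#include <stdio.h>

/* Majority vote over the results of N independently written versions.
 * Returns 0 and stores the agreed answer if some value holds a strict
 * majority; returns -1 (abort: error too severe) otherwise. */
int majority_vote(const int results[], int n, int *answer)
{
    int i, j, votes;

    for (i = 0; i < n; i++) {
        votes = 0;
        for (j = 0; j < n; j++)
            if (results[j] == results[i])
                votes++;
        if (votes > n / 2) {
            *answer = results[i];
            return 0;
        }
    }
    return -1;
}

int main(void)
{
    int results[3] = {42, 42, 7};   /* one version disagrees */
    int answer;

    if (majority_vote(results, 3, &answer) == 0)
        printf("agreed answer: %d\n", answer);   /* prints 42 */
    else
        printf("no majority - aborting\n");
    return 0;
}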

Two problems with the use of fault tolerant software
are that it requires considerably more effort to program, and it
requires more hardware for the same system performance. The requirement
of identical I/O behavior presents several problems. Two different
methods of performing a calculation may yield very close, but differing
results. The decision to take the average or disregard all results may
have severe ramifications. The program as a whole can proceed no faster
than the slowest of the algorithms used to compute the result, and
faulty state information may propagate throughout a program, thus giving
it the appearance of being untrustworthy when a single error caused the
whole problem. More reliability can be gained by increasing N, but if
performance is to remain the same, this requires a factor of (slightly
more than) N increase in hardware and software costs.

4 - Future Defensive Strategies

Harrison, Ruzzo, and Ullman described the general
principle of protection in information systems in 1976 [46], and showed
that even some very simple sets of rights made it undecidable to
determine the safety of a protection system. Lampson subsequently
demonstrated that the confinement problem, wherein we attempt to prevent
leakage of information by a supplier of an information service, is not
likely to be solvable [10]. As a result of this early work, most
further examination of these topics was considered fruitless.

The introduction of the computer virus problem [1,13]
made it clear that a dramatic change in the methods by which
we protect information must be considered. We used the "Universal
Protection Machine" [13] to prove that information flows to the
transitive closure of information flow, and that the "read" and "write"
rights that we currently depend on so heavily, lead inexorably to a
conflict between secrecy and integrity. If the integrity of information
and the benefits of sharing, general purpose operation, and transitivity
of information flow are to be maintained, we must abandon the premise of
preventing information leakage as the primary function of protection
systems, and embrace integrity protection as a primary goal.

Details of Previous Results:

A "protection system" is defined [46] as:

{ S, O, R, P: (S x O) => R, C }

where S = {s1,...,si}, the set of subjects,
      O = {o1,...,ok}, the set of objects,
      R = {r1,...,rl}, the set of rights,
      P: (S x O) => R, a mapping from (subject, object) pairs to rights,
and   C = {c1,...,cm}, the set of commands.

The most common rights associated with this model are
"ownership", "read", "write", "execute", "append", "create", and
"delete", but in general, any set of rights can be used to afford the
desired functioning of the protection system, and those rights can
correspond to any sequence of operations that can be performed by the
system. Hence, we can include rights like "encrypt", "decrypt",
"authenticate", "backup", and any number of other rights that we are
able to practically implement.

The "commands" are a set of transforms on P which
allow the rights to change with time. Typical transforms include
"add-right", "delete-right", "add-subject", "add-object",
"delete-subject", and "delete-object". With even these simple
transforms, it is undecidable whether or not a given subject can
eventually attain a given right to a given object [46].
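
A sketch of this model, with rights encoded as bits of our own choosing
over a subject-object matrix, shows how simple the transforms are even
though safety under them is undecidable [46]:

#define NSUBJECTS 8
#define NOBJECTS  8

/* Rights as bits, so that P:(SxO)=>R becomes a matrix of right
 * sets [46]. Any practically implementable right can be added. */
enum right {
    R_OWN     = 1 << 0,
    R_READ    = 1 << 1,
    R_WRITE   = 1 << 2,
    R_EXECUTE = 1 << 3,
    R_ENCRYPT = 1 << 4,
    R_BACKUP  = 1 << 5
};

static unsigned int P[NSUBJECTS][NOBJECTS];

/* Two of the typical commands from the text. No algorithm can
 * decide, for all such systems, whether a given subject can ever
 * attain a given right to a given object [46]. */
void add_right(int s, int o, unsigned int r)    { P[s][o] |= r; }
void delete_right(int s, int o, unsigned int r) { P[s][o] &= ~r; }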

As Lampson noticed [10], we may allow the provider of
a service access to information in order to process it, but if we allow
any state information to remain subsequent to the provision of that
service (e.g., the bill for that service), that information can be used
to encode some aspects of the data input to that service. Therefore, we
cannot effectively confine that information to the client of the
service. This makes secrecy hard to maintain, but it does not in any
way affect the integrity of the original data, nor does it prevent the
assurance by the client that the service is properly provided.

The basic premise of [10] was that of protecting the
secrecy of information, both from the standpoint of the provider and of
the client of the information service. The provider of a service need
not have general access to the client's information, but rather only to
the information required for the performance of that service. More to
the point, information requiring processing can be sent to the service
for that processing, and the results sent back to the client after the
service is completed. The client need not grant authority to the
provider to act for the client in any way other than the performance of
the requested service. This leads to the "data flow" concept which has
been examined in some depth, but not in the context of a protection
mechanism.

Some Observations:

Since the results returned to the client can be
examined by the client at will before subsequent reuse, we may
effectively limit the corruption possible by the provider until such
time as their results are reused. We can also prevent other data
belonging to the client from being affected by the direct action of the
provider.

We can further improve integrity by making programs
used to provide services available for inspection by the client of the
service. This provides the client the opportunity to verify that the
provided service contains no undesirable code. This doesn't provide
proof of the integrity of the service, but it restricts the successful
attack to the most clever of providers. Furthermore, the use of sets of
standards by which we judge provided programs allows us to prevent the
use of non-obvious or tricky programming practices which may hide corrupt
services.

Although this mechanism is not sufficient to prevent
viral attack in the general sense, it may provide a substantial degree
of protection to those that take the time to use it with proper care.
In addition, this mechanism provides a means for experimenting with the
most general purpose sort of protection in order to increase our
knowledge and understanding of the theory and practice of protection
policy and implementation.

We are left with two major problems. The first is the
provision of a mechanism by which we can transmit a problem to a
provider, allow the service to be provided, and get the results
back without granting the provider modification rights to information
under the control of the client. The second, and much more difficult
problem, is the demand for access to the source programs for all
services provided by providers, which are often among the most highly
guarded results of the provider's effort.

The Unix 'setuid' Facility and the 'access' Protection
Program:

The method by which we attain the first goal is
provided by the Unix 'setuid' facility [47]. This facility allows the
provider of a program to grant the client the provider's rights during
the execution of that program. Although many consider this to be a
severe protection deficiency because it is difficult to use with proper
caution, it is a vital feature in that it provides the underlying
mechanism for allowing a server to provide a service without threatening
the integrity of information owned by the client.
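
A sketch of the mechanism (the authorization stub here is hypothetical;
the real check is performed by the "access" program described below): a
provider-owned program installed with "chmod u+s" runs with the
provider's effective user ID, while the real user ID still identifies
the client, so it can both perform the service and check who is asking:

#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>

/* Stub for illustration only: a real service would consult the
 * provider's access table before acting on the client's behalf. */
static int authorized(uid_t client)
{
    (void)client;
    return 1;
}

int main(void)
{
    uid_t client   = getuid();     /* the client: who is asking */
    uid_t provider = geteuid();    /* the provider: whose rights we hold */

    if (!authorized(client))
        return 1;
    printf("serving uid %d with the rights of uid %d\n",
           (int)client, (int)provider);
    /* ... perform only the advertised service, nothing more ... */
    return 0;
}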

The basic "setuid" facility allows the granting of
rights, but does not facilitate its use by users. For this reason, we
developed the "access" program which provides a convenient method by
which users may use the "setuid" facility to provide the desired
results. This program was first demonstrated in 1982 as a method for
permitting users to grant generic access rights to other users on a user
by user basis. It has since been augmented to provide the mechanism
described herein.

The access program operates by taking as arguments
the user ID of the provider and the name the provider has specified for
the service of interest. It then looks in a table of triples provided
by the provider to determine whether the request is authorized for the
client,
and how the request is to be serviced. The access table for each user
consists of the name of the service as known to the client, the user ID
of the client who has authority to use the requested service under the
requested name (* for any user), and a set of commands to be carried out
under the authority of the provider in order to provide that service.
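
An access file of the kind described, reconstructed here to match the
example discussed next (the contents are inferred for illustration, not
original), might read:

test  fc    ls /usr/jones/fcstuf
test  john  echo "welcome john"
test  *     program < inputdata
oops  john  ls /usr/jones/fcstuf
help  *     grep "`cat`" /usr/lib/helpfiles

The "help" line reads the client's standard input (e.g., the word
"list") as the pattern to search for.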

In this example, if the user "fc" requests service
"test", access will be granted to list the files stored in directory
"/usr/jones/fcstuf". If the user "john" makes the same request, a
welcome greeting is given. When any other user makes the same request,
the file "inputdata" is processes by the program "program", and the
results are returned to the client as output from the access program.
When the user "john" requests the service "oops", the same result is
given as if "fc" requested "test". Finally, the command:

echo "list" | access help

where the provider is the user whose access file appears
above, produces an output listing of all lines in "/usr/lib/helpfiles"
that contain the word "list". This demonstrates the general method by
which input and output may be derived through the use of the access
program. An access control file may of course request services of other
providers, etc.

The nature of the service provided is general purpose,
and rights can be granted to any subject on an individual basis to
anything that the provider has rights to. We thus have a general
purpose mechanism for granting rights in an information system which
fits the Harrison, Ruzzo, and Ullman model of a general purpose
protection mechanism [46] and can be modeled on the UPM [13] to
determine effects on information in the system being protected. The
results of theoretical work based on these models can be experimentally
verified using the "access" program under Unix as a laboratory, and
results of this work can be directly translated into operation. We have
provided a source code listing for the "access" program in the
appendices.

The Social Problem with Integrity Maintenance:

In order to assure integrity in such a system, we
should limit the ability to modify services to their owner, and provide
read access to each user with the capability to use each service. The
first condition prevents arbitrary corruption of the integrity of
services, while the second condition provides a method by which the
client of a service may verify the integrity of that service
independently. Unfortunately, there is a social problem that has
greatly hindered the attainment of the second condition. The problem is
that providers of services often do not provide access to the source
code of their services.

Recent changes in US copyright law have made the
copyright of software without a public record of the source code
permissible. Thus, the provider's rights to intellectual property are
protected, but society does not benefit from the advances of its
individuals. The basic premise of intellectual property rights is that
in exchange for providing the public with access to intellectual
property, rights to profit from intellectual efforts are guaranteed.
The law as it has evolved provides the right to intellectual property
without giving the society the benefits of access to results. Thus, we
are stifling progress in the name of protecting intellectual rights, but
by any historical standard, the intellectual right is that of
recognition of authorship and reasonable profit from work, not exclusion
of access to methods and guaranteed profits.

There is another social reason for the continual
eroding of the integrity of information residing in information systems,
that being the exclusion of software from the "implied warranty of sale"
which is associated with all other products in today's market. The fact
that most purchased software explicitly waives any warranty of any kind
implies that the author refuses to take responsibility for the contents.
When combined with the special copyright provisions granted to software,
this provides a situation in which the provider of a service need not
grant access to assure the integrity of the provided service, need not
assure the consumer of the suitability of that service, need not reveal
the techniques used in that service to others wishing to augment it, and
need not take any responsibility for any ill effects of that service.
This is a sure recipe for integrity corruption.

5 - Summary, Conclusions, and Further Work

Viral attacks appear to be easy to develop in a very
short time, can be designed to leave few if any traces in most current
systems, are effective against modern security policies for multilevel
usage, and require only minimal expertise to implement. Their potential
threat is severe, and they can spread very quickly through a computer
system or network. They thus present a widespread and immediate threat
to current systems and networks.

The proper use of the techniques outlined in this
paper offers some immediate protection from simple viruses, and can
likely be used to reduce the risk somewhat. The more advanced forms of
protection such as software self defense, N version programming, and the
like require time to implement, test, and disseminate, and immediate
efforts in this direction may be vital to the survival of computer use
as we know it today. The chilling effect already encountered by this
researcher indicates only the tip of the iceberg. As fear of viruses
grows, we will likely see a rapid movement towards isolationist systems.
A dramatic reduction in the creativity of programmers is likely to
result from this isolationist attitude, and we may see a return to the
"dark ages" of computing where everybody is on their own, and
cooperation wanes.

A wide variety of defensive techniques have been
considered, and many of them present vast new lines of research.
Research in all of the areas listed above is underway, but most
researchers in the US are sadly lagging behind the rest of the world.
This is primarily due to the efforts of the US intelligence community to
stifle open research in this area, and a too conservative attitude by
other funding agencies in the US. If there is any hope of finding
adequate defense against viruses, it lies in further scientific
research.

We have provided an easy to use and general purpose
method by which users of Unix can experiment with more general
purpose protection mechanisms than those normally associated with
information systems, and have in the process provided a means by which
viral spread may be severely limited. We have provided sources for this
protection mechanism so that its integrity may be easily verified by the
users, and so that it can be used as a part of high integrity systems
during the period that will be necessary in order to provide more
advanced protection mechanisms.

We conclude that in depth examination of alternative
protection rights is an important line of research to pursue, and
that the "access" program provides a means by which this research may
be rapidly and effectively performed. A concerted research effort in
this area is already underway in the classified domain, but this
provides little or no protection for the business, financial, and
academic communities which support the government with their efforts.
It is critical that we support research in the unclassified domain if we
are to protect our technological infrastructure.

Although the technical aspects of viral defense have
some hope of prevailing, the social problems of integrity protection
seem much more difficult to resolve. The present situation is a recipe
for integrity corruption and chilling of progress that has already taken
a severe toll on the United States. It will continue to take its toll
until integrity is viewed as the major requirement in information
systems. It is only by making integrity a social priority that we can
hope to eradicate the corruptions that are running rampant through our
modern information systems.

References:

[1] F. Cohen, "Computer Viruses - Theory and
Experiments", DOD/NBS 7th Conference on Computer Security, originally
appearing in IFIP-sec 84, also appearing in "Computers and Security",
V6(1987), pp22-35 and other publications in several languages