Experiments with Computer Viruses

Copyright(c), 1984, Fred Cohen - All Rights Reserved

To demonstrate the feasibility of viral attack and the degree to
which it is a threat, several experiments were performed. In each case,
experiments were performed with the knowledge and consent of systems
administrators. In the process of performing experiments,
implementation flaws were meticulously avoided. It was critical that
these experiments not be based on implementation lapses, but only on
fundamental flaws in security policies.

The First Virus

On November 3, 1983, the first virus was conceived of as an
experiment to be presented at a weekly seminar on computer security.
The concept was first introduced in this seminar by the author, and the
name 'virus' was suggested by Len Adleman. After 8 hours of expert
work on a heavily loaded VAX 11/750 system running Unix, the first virus
was completed and ready for demonstration. Within a week, permission
was obtained to perform experiments, and 5 experiments were performed.
On November 10, the virus was demonstrated to the security seminar.

The initial infection was implanted in 'vd', a program that
displays Unix file structures graphically, and introduced to users via
the system bulletin board. Since vd was a new program on the system, no
performance characteristics or other details of its operation were
known. The virus was implanted at the beginning of the program so that
it executed before any other processing.
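
The control flow of such a prepending infection can be sketched
abstractly. The following Python model is purely illustrative and
contains no real infection mechanism: a program is represented as a
list of named steps, and 'infection' prepends a step that executes
before any of the host's own processing. All names are hypothetical.

```python
# Toy model of a prepending infection.  A "program" is just a list of
# step names; infection inserts a viral step at the front so that it
# executes before any of the host's own processing.

MARKER = "viral-step"   # stands in for the viral code; illustrative only

def infect(program):
    """Return a copy of `program` with the viral step prepended once."""
    if program and program[0] == MARKER:
        return list(program)          # already infected; leave unchanged
    return [MARKER] + list(program)

host = ["open files", "draw file structure", "exit"]
infected = infect(host)

assert infected[0] == MARKER          # viral step runs first
assert infected[1:] == host           # host then behaves normally
assert infect(infected) == infected   # no double infection
```

The guard against double infection mirrors the precaution that a real
virus must recognize already-infected hosts to avoid unbounded growth.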

In order to keep the attack under control, several precautions
were taken. All infections were performed manually by the attacker, and
no damage was done, only reporting. Traces were included to assure that
the virus would not spread without detection, access controls were used
for the infection process, and the code required for the attack was kept
in segments, each encrypted and protected to prevent illicit use.

In each of five attacks, all system rights were granted to the
attacker in under an hour. The shortest time was under 5 minutes, and
the average under 30 minutes. Even those who knew the attack was taking
place were infected. In each case, files were 'disinfected' after
experimentation to assure that no user's privacy would be violated. It
was expected that the attack would be successful, but the very short
takeover times were quite surprising. In addition, the virus was fast
enough (under 1/2 second) that the delay to infected programs went
unnoticed.

Once the results of the experiments were announced,
administrators decided that no further computer security experiments
would be permitted on their system. This ban included the planned
addition of traces which could track potential viruses and password
augmentation experiments which could potentially have improved security
to a great extent. This apparent fear reaction is typical: rather than
trying to solve technical problems technically, policy solutions are
often chosen.

After successful experiments had been performed on a Unix
system, it was quite apparent that the same techniques would work on
many other systems. In particular, experiments were planned for a
Tops-20 system, a VMS system, a VM/370 system, and a network containing
several of these systems. In the process of negotiating with
administrators, feasibility was demonstrated by developing and testing
prototypes. Prototype attacks were developed for the Tops-20 system by
an experienced Tops-20 user in 6 hours, for the VM/370 system by a
novice user with the help of an experienced programmer in 30 hours, and
for the VMS system by a novice user without assistance in 20 hours.
These programs demonstrated the ability
to find files to be infected, infect them, and cross user boundaries.

After several months of negotiation and administrative changes,
it was decided that the experiments would not be permitted. The
security officer at the facility was in constant opposition to security
experiments, and would not even read any proposals. This is
particularly interesting given that systems programmers and security
officers were offered the opportunity to observe and oversee all
aspects of all experiments. In addition, systems administrators
were unwilling to allow sanitized versions of log tapes to be used to
perform offline analysis of the potential threat of viruses, and were
unwilling to have additional traces added to their systems by their
programmers to help detect viral attacks. Although there is no apparent
threat posed by these activities, and they require little time, money,
and effort, administrators were unwilling to allow investigations. It
appears that their reaction was the same as the fear reaction of the
Unix administrators.

A Bell-LaPadula Based System

In March of 1984, negotiations began over the performance of
experiments on a Bell-LaPadula [Bell73] based
system implemented on a Univac 1108. The experiment was agreed upon in
principle in a matter of hours, but took several months to be
finalized. In July of 1984, a two-week period was arranged for
experimentation. The purpose of this experiment was merely to
demonstrate the feasibility of a virus on a Bell-LaPadula based system
by implementing a prototype.

Because of the extremely limited time allowed for development
(26 hours of computer usage by a user who had never used an 1108, with
the assistance of a programmer who hadn't used an 1108 in 5 years), many
issues were ignored in the implementation. In particular, performance
and generality of the attack were completely ignored. As a result, each
infection took about 20 seconds, even though it could easily have been
done in under a second. Traces of the virus were left on the system
although they could have been eliminated to a large degree with little
effort. Rather than infecting many files at once, only one file at a
time was infected. This allowed the progress of a virus to be
demonstrated very clearly without involving a large number of users or
programs. As a security precaution, the system was used in a dedicated
mode with only a system disk, one terminal, one printer, and accounts
dedicated to the experiment.

After 18 hours of connect time, the 1108 virus performed its
first infection. The host facility provided a fairly complete set of
user manuals, use of the system, and the assistance of a competent past
user. After 26 hours of use, the virus was demonstrated to a
group of about 10 people including administrators, programmers, and
security officers. The virus demonstrated the ability to cross user
boundaries and move from a given security level to a higher security
level. Again it should be emphasized that no system bugs were involved
in this activity, but rather that the Bell-LaPadula model allows this
sort of activity to legitimately take place.
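
The level-crossing described above can be illustrated with a toy model
of the two Bell-LaPadula rules (the object names and levels below are
hypothetical, not taken from the experiment): a subject may read
objects at or below its own level and write objects at or above it, so
a high-level subject that executes a low-level infected program runs
the viral code at its own level, which may then legitimately write
upward.

```python
# Toy model of Bell-LaPadula information flow.  Levels: 0 = low,
# 1 = high.  A subject may read down (no read up) and write up
# (no write down); both checks below encode exactly those rules.

def can_read(subject_level, object_level):
    return object_level <= subject_level     # simple security property

def can_write(subject_level, object_level):
    return object_level >= subject_level     # *-property

# A low-level user plants an infected program; a high-level object
# is initially clean.  (Names and levels are illustrative.)
objects = {"vd":     {"level": 0, "infected": True},
           "report": {"level": 1, "infected": False}}

# A high-level subject legitimately executes (reads) the low-level
# program; the viral code now runs at the subject's level and may
# legitimately write to the high-level object.
subject = 1
if can_read(subject, objects["vd"]["level"]) and objects["vd"]["infected"]:
    if can_write(subject, objects["report"]["level"]):
        objects["report"]["infected"] = True

# The infection crossed from low to high without violating either rule.
assert objects["report"]["infected"]
```

Neither rule was broken at any step, which is the sense in which the
model permits this activity to legitimately take place.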

All in all, the attack was not difficult to perform. The code
for the virus consisted of 5 lines of assembly code, about 200 lines of
Fortran code, and about 50 lines of command files. It is estimated that
a competent systems programmer could write a much better virus for this
system in under 2 weeks. In addition, once the nature of a viral attack
is understood, developing a specific attack is not difficult. Each of
the programmers present was convinced that they could have built a
better virus in the same amount of time. (This is believable since this
attacker had no previous 1108 experience.)

Instrumentation

In early August of 1984, permission was granted to instrument a
VAX Unix system to measure sharing and analyze viral spreading. Data at
this time is quite limited, but several trends have appeared. The
degree of sharing appears to vary greatly between systems, and many
systems may have to be instrumented before these variations are well
understood. A small number of users appear to account for the vast
majority of sharing, and a virus could be greatly slowed by protecting
them. The protection of a few 'social' individuals might also slow
biological diseases. The instrumentation was conservative in the sense
that infection could happen without the instrumentation picking it up,
so estimated attack times are unrealistically slow.

As a result of the instrumentation of these systems, a set of
'social' users were identified. Several of these surprised the main
systems administrator. The number of systems administrators was quite
high, and if any of them were infected, the entire system would likely
fall within an hour. Some simple procedural changes were suggested to
slow this attack by several orders of magnitude without reducing
functionality.
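
The effect of protecting a few 'social' users can be sketched with a
small reachability computation over a sharing graph. The graph below
is hypothetical, not measured data; it only illustrates how removing
one highly shared user from the infection path limits spread.

```python
# Toy simulation of viral spread over a sharing graph.  An edge u -> v
# means user v runs programs owned by user u, so an infection of u
# eventually reaches v.  Protected users never become infected.

from collections import deque

def reachable(shares, start, protected=frozenset()):
    """Users eventually infected from `start`, skipping protected ones."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in shares.get(u, ()):
            if v not in seen and v not in protected:
                seen.add(v)
                queue.append(v)
    return seen

# One highly 'social' user (s) is run by almost everyone else.
shares = {"a": ["s"], "s": ["b", "c", "d", "e"], "b": ["c"]}

full = reachable(shares, "a")
slowed = reachable(shares, "a", protected={"s"})

# Protecting the single social user cuts the spread dramatically.
assert len(full) > len(slowed)
```

In this illustrative graph the unprotected infection reaches every
user, while protecting the one social user confines it to the
originator, which is the qualitative effect suggested by the data.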

Two systems are shown, with three classes of users (S for
system, A for system administrator, and U for normal user). '##'
indicates the number of users in each category, 'spread' is the average
number of users a virus would spread to, and 'time' is the average time
taken to spread them once they logged in, rounded up to the nearest
minute. Average times are misleading because once an infection has
reached the 'root' account on Unix, all access is granted. Taking this
into account leads to takeover times on the order of one minute which is
so fast that infection time becomes a limiting factor in how quickly
infections can spread. This coincides with previous experimental
results using an actual virus.

Users who were not shared with are ignored in these
calculations, but other experiments indicate that any user can induce
such sharing by offering a program on the system bulletin board.
Detailed analysis demonstrated that systems administrators tend to try
these programs as soon as they are announced. This allows normal users
to infect system files within minutes. Administrators used their
accounts for running other users' programs and storing commonly executed
system files, and several normal users owned very commonly used files.
These conditions make viral attack very quick. The use of separate
accounts for systems administrators during normal use was immediately
suggested, and the systematic movement (after verification) of commonly
used programs into the system domain was also considered.

Other Experiments

Similar experiments have since been performed on a variety of
systems to demonstrate feasibility and determine the ease of
implementing a virus on many systems. Simple viruses have been written
for VAX VMS and VAX Unix in the respective command languages, and
neither program required more than 10 lines of command language to
implement. The Unix virus is independent of the computer on which it is
implemented, and is therefore able to run under IDRIS, VENIX, and a host
of other UNIX-based operating systems on a wide variety of machines. A
virus written in Basic has been implemented in under 100 lines for the
Radio Shack TRS-80, the IBM PC, and several other machines with extended
Basic capabilities. Although this is a source-level virus and could be
detected fairly easily by the originator of any given program, it is
rare that a working program is examined by its creator after it is in
operation. In all of these cases, the viruses have been written so that
the traces in the respective operating systems would be incapable of
determining the source of the virus even if the virus itself had been
detected. Since the UNIX and Basic viruses could spread through a
heterogeneous network so easily, they are seen as quite dangerous.

As of this time, we have been unable to obtain permission to
either instrument or experiment on any of the systems that these viruses
were written for. The results obtained for these systems are based on
very simple examples and may not reflect their overall behavior on
systems in normal use.

Summary and Conclusions

The following table summarizes the results of the experiments to
date. The three systems are across the horizontal axis (Unix,
Bell-LaPadula, and Instrumentation), while the vertical axis indicates
the measure of performance (time to program, infection time, number of
lines of code, number of experiments performed, minimum time to
takeover, average time to takeover, and maximum time to takeover) where
time to takeover indicates that all privileges would be granted to the
attacker within that delay from introducing the virus.

Viral attacks appear to be easy to develop in a very short time,
can be designed to leave few if any traces in most current systems, are
effective against modern security policies for multilevel usage, and
require only minimal expertise to implement. Their potential threat is
severe, and they can spread very quickly through a computer system. It
appears that they can spread through computer networks in the same way
as they spread through computers, and thus present a widespread and
fairly immediate threat to many current systems.

The problems with policies that prevent controlled security
experiments are clear: denying users the ability to continue their work
promotes illicit attacks, and if one user can launch an attack without
using system bugs or special knowledge, other users will also be able
to. Simply telling users not to launch attacks accomplishes little:
users who can be trusted will not launch attacks, while users who would
do damage cannot be trusted, so only legitimate work is blocked. The
perspective that every attack allowed to take place
reduces security is in the author's opinion a fallacy. The idea of
using attacks to learn of problems is even required by government
policies for trusted systems [Klein83][Kaplan82]. It would be more rational to use
open and controlled experiments as a resource to improve security.