28 July 2007

Of late, insider attacks have gained a lot of attention.
Such attacks are quite pernicious, not just for the damage they
can do — and they can do a lot — but also because of the
effect on morale. What should we do about the problem?
There are no simple answers.

Insider attacks are not a single phenomenon; rather, there
are at least three different types of insider attack.
In general, the different types of attack require different defenses.

The first type of attack involves an abuse of discretion rather than
any technical failing. That is,
the misbehaving insider performs some action he or she is authorized
to do, but for the wrong reason. To give one example, I, as a professor,
have a fair amount of discretion in submitting grade change forms for
past students. If, however, I choose to give good grades to people
who pay me well, I would be acting improperly.
This is an abuse of authority attack. (There have been recent
allegations of just this sort of attack, at
Touro College in New York and
Diablo
Valley College in California.
In the former, those charged include
the former director of admissions and the former director of the computer
center. The accused in the California case are part-time student employees.
The difference in rank is striking; employee malfeasance is not limited
to any one stratum.)

There's not a lot that can be done technically to prevent abuse of
authority attacks.
The best that can be done is to create copious log files, recording
what was done, by whom, and when. Statistical analysis can then spot
who seems to be performing an unusual number of operations, even
individually legitimate ones.
In the event of a prosecution, these logs can be correlated with, say,
bank deposit records. Of course, this means that the organization's
own logs need to be retained for long enough; failure to do that was
a problem in the
Greek cellphone tapping scandal.
Other defenses are more procedural: two-party control, audits,
and so on. (Note, of course, that technical mechanisms may be needed to
support these defenses: are log files sufficient for an audit, is there
proper support for two-party control, etc.?)
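The statistical review of log files mentioned above can be sketched very simply: count each person's logged operations and flag anyone far above the norm. This is a minimal illustration, not a real audit tool; the log format, names, and threshold are all assumptions.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical audit-log records: (user, action) pairs, e.g. one entry
# per grade-change form submitted. The format is invented for illustration.
def flag_outliers(log, threshold=3.0):
    """Return users whose count of logged actions exceeds the mean
    by more than `threshold` standard deviations."""
    counts = Counter(user for user, _action in log)
    values = list(counts.values())
    if len(values) < 2:
        return []                       # no baseline to compare against
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []                       # everyone behaves identically
    return [u for u, n in counts.items() if (n - mu) / sigma > threshold]

# One user submits 40 grade changes; twenty colleagues submit one each.
log = [("alice", "grade-change")] * 40 + \
      [(f"user{i}", "grade-change") for i in range(20)]
print(flag_outliers(log))  # ['alice']
```

Of course, flagging only says that someone's activity is unusual; deciding whether it was an abuse of discretion still requires the procedural checks (audits, two-party control) described above.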

In knowledge-based attacks, the bad guy uses non-public knowledge
gained in the course of his or her employment. This may be the
identity of valuable targets (e.g., which machine holds the master
database?), details of how systems work, insider terminology or the
names of employees for use in social engineering attacks, or even
passwords.

Defenses here depend on what type of knowledge is being exploited.
As always, log files help.
An intrusion detection system (IDS)
is one useful approach, though certain insiders will
know what the system is looking for, what the action thresholds are,
where honeypots are located, etc.
In some environments, compartmentalization of information can help,
though that can have a deleterious effect on productivity.

Ideally, a system is secure no matter how much an enemy knows.
This principle — that one should not rely on
security through obscurity — was set forth by
Kerckhoffs in 1883 in his
La
Cryptographie Militaire:
"Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient
tomber entre les mains de l'ennemi." (Roughly, "the system must not
require secrecy, and it must be able to fall into the enemy's hands
without inconvenience.")
Keeping certain information secret is a perfectly valid first line of
defense, but it shouldn't be the only one.

The third form of insider attack is based on privileged access.
In this case, the attacker has already passed some of the defenses, such
as firewalls. The failure here can be one of two different types. First,
the outside-facing defense may be the only layer. In all but the smallest
organizations, this is wrong. There is no reason why all employees should
have all possible access privileges. In the second case, the rogue
employee can use his or her access to exploit other security failings,
such as buggy code. (I noted in 1994 that the primary purpose of a
firewall is to
keep bad
guys away from buggy code.)
Defense in depth is a good idea; the failure here of one layer shows the
benefit of the others.

Privileged access attacks are the most amenable to technical solutions.
Certainly, intrusion detection software is a useful tool here, though its
primary output — that something bad has happened — is useful
whether the attack came from inside or outside. We can also strengthen
access control rules to limit access to only those people or programs who
need it. Often, of course, the underlying system doesn't support such
fine-grained access control. For example, in one common configuration of the
PHP web scripting language, it is
impossible to secure data files used by one PHP application from
any other PHP application on the same web server — and that includes
user-written PHP scripts if your web server permits those.
This isn't much of an issue for large companies, who can use dedicated
servers for each application; it is a problem for smaller organizations.
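Where the platform does support fine-grained access control, the tightening described above amounts to an explicit allow-list per resource, with everything else denied by default. Here is a minimal sketch; the resource and principal names are hypothetical, chosen only to illustrate the least-privilege idea.

```python
# Minimal allow-list access control: each resource names exactly the
# principals (people or programs) that need it; everyone else is denied.
# All resource and principal names below are invented for illustration.
ACL = {
    "master-database": {"billing-app", "dba-team"},
    "payroll-files":   {"payroll-app"},
}

def check_access(principal, resource):
    """Deny by default: access requires an explicit ACL entry."""
    return principal in ACL.get(resource, set())

print(check_access("billing-app", "master-database"))   # True
print(check_access("web-frontend", "master-database"))  # False: not listed
```

The important design choice is the default: an unknown resource or an unlisted principal gets denied, so a rogue insider's privileged access extends no further than the entries someone deliberately wrote down.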

It is important to realize that insider attacks are a real threat. The
CIA had
Aldrich
Ames and the FBI had
Robert
Hanssen. A recent
survey
(by, admittedly, a party who stands to benefit) suggests that many companies
believe they are being infiltrated by organized crime. Another
report
says that 89% of fraud (note: this is fraud in general, not computer
crime) is committed by insiders; worse yet, 60% of fraudsters are members
of senior management or board members.

It is clear that there is no one solution to insider threats.
Business processes are a large part of the answer. Background checks, especially in
the national security arena, are another. Computer scientists have their
own role to play, both on their own and by supporting the business process.