The immediate and lasting impacts of IT sabotage by insiders can be catastrophic. In one case analyzed by researchers at the CERT Insider Threat Center, a disgruntled employee sabotaged the IT systems hosting a child abuse hotline, blocking access to the organization’s website. A similar case severely limited 911 emergency services in four major cities. In another instance, a company went out of business after an insider deleted all of its research data and stole all of the backups.

Since 2001, researchers at the CERT Insider Threat Center have documented malicious insider activity by examining publicly available information such as media reports and court transcripts. We have also conducted interviews with the United States Secret Service, victims’ organizations, and convicted felons. The goals of our research have been to answer the following questions:

Are there patterns an employee exhibits prior to an attack? If so, what are they?

If such patterns exist, can they distinguish malicious insiders from everyone else?

And if so, is there a point at which the malicious insiders could have been detected or at which their behavior could have been identified as problematic?

Previous research has been conducted on enabling and measuring early detection of insider threats, but several of those studies lacked access to a body of real-world cases like the one our team has assembled over the past 11 years.

As part of our research, we focused on the timeline of events in cases of IT sabotage. Specifically, we looked at the following:

the types of behavior that were observable prior to an attack

patterns or models we can abstract from the saboteurs

the points at which the employer could have acted to prevent or deter the attack, either by addressing the employee’s disgruntlement or by protecting IT systems.

Our analysis was based on more than 50 cases of insider IT sabotage (other types of insider threat behavior include fraud and theft of intellectual property). From the selected cases, we created a chronology of events for each incident. The number of events per insider incident ranged from 5 to more than 40, with an average of 15.

We began by trying to identify specific events in each case that represented key points of the incident. These key points are described as follows:

Tipping Point (TP): the first observed event at which the insider clearly became disgruntled. In one case we examined, it was reported that an “insider had a dispute with management regarding salary and compensation.”

Malicious Act (MA): the first significant observed event that clearly enabled the attack. In the case described above, it was reported that the “insider inserted a logic bomb into production code.”

Action on Insider (AI): the first instance of an organizational response to the insider (firing, arrest, etc.). In our example, the insider was fired and a search warrant was executed “to find missing backup tapes at the insider’s residence.”
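The three key points above can be thought of as a simple timeline model for each incident. The following Python sketch illustrates the idea with an `Incident` structure and dates that are entirely invented for illustration (they are not drawn from the actual case data); it computes the intervals between key points, such as the window between the tipping point and the malicious act in which an employer could still have intervened.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Incident:
    # Maps a key-point label ("TP", "MA", "AI") to the date of the
    # first event of that kind in the case chronology.
    events: dict

    def days_between(self, start: str, end: str) -> int:
        """Days from the first `start` key point to the first `end` key point."""
        return (self.events[end] - self.events[start]).days

# Invented example timeline, loosely following the case described above.
case = Incident(events={
    "TP": date(2010, 3, 1),   # dispute with management over compensation
    "MA": date(2010, 3, 15),  # logic bomb inserted into production code
    "AI": date(2010, 4, 2),   # insider fired; search warrant executed
})

print(case.days_between("TP", "MA"))  # → 14 (window for intervention)
print(case.days_between("MA", "AI"))  # → 18
```

A structure like this makes it straightforward to ask aggregate questions across many cases, such as the typical delay between disgruntlement and attack.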

Initial Findings

After we determined which events corresponded to each key point, we analyzed each case to determine whether the events or behaviors prior to each event indicated a predisposition for sabotage. For this project, we considered predispositions to be characteristics of the individual that can contribute to the risk of behaviors leading to malicious activity. For example, one issue we examined is whether an employee—prior to the point of clear disgruntlement—demonstrated a serious mental health disorder, an addiction to drugs or alcohol, or a history of rule conflicts.

Although we are still conducting analysis, two patterns have emerged:

In general, insiders begin conducting attacks soon after reaching a tipping point of disgruntlement.

Insiders tend to exhibit behavioral indicators prior to exhibiting technical indicators. In particular, concerning behaviors of an interpersonal nature were generally observed prior to concerning behaviors on IT systems.
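The second pattern can be checked mechanically once a case’s chronology is tagged by indicator type. This Python sketch uses invented events and an assumed behavioral/technical tagging scheme (not the real case corpus) to test whether the first interpersonal indicator preceded the first technical one:

```python
from datetime import date

# Invented case chronology: each event is (date, indicator type, description).
events = [
    (date(2010, 2, 20), "behavioral", "argument with supervisor"),
    (date(2010, 3, 1),  "behavioral", "dispute over compensation"),
    (date(2010, 3, 15), "technical",  "logic bomb inserted into production code"),
]

def first_of(kind):
    """Date of the earliest event of the given indicator type, or None."""
    dates = [d for d, k, _ in events if k == kind]
    return min(dates) if dates else None

# Did a behavioral indicator appear before any technical indicator?
behavioral_first = first_of("behavioral") < first_of("technical")
print(behavioral_first)  # → True for this invented case
```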

Addressing Challenges

One of the challenges we face in our work is measuring early detection with respect to the moment of attack. Specifically, we have found the following factors particularly troublesome:

Sabotage attacks are often complicated, and it’s hard to pinpoint specific event timing.

Defining an attack is not simple. In particular, does the attack include the time spent on planning? Is planting a logic bomb considered the attack, or does the attack begin when the logic bomb executes?

Early detection times may vary according to analysis parameters, and timing fidelity is inconsistent. For some cases we have timing information down to the minute; for others, we only know the day the event occurred.

Deciding which events were observable is hard. Does this mean “capable of being observed” (some system exists to observe the behavior), or “capable of being observed in each specific case” (the organization possessed and correctly utilized the tool to detect the behavior)?

Measuring employee disgruntlement is hard. Behavior that indicates disgruntlement for one person may be normal behavior for another. Trying to identify a point (or set of points) in a case timeline where the insider clearly became disgruntled, or where the disgruntlement became markedly worse, is therefore highly subjective.

Another difficulty we have experienced is a scarcity of detailed data. While we have several hundred cases to choose from, our sources for these cases are sometimes limited to court documents, media reports, etc., which generally do not contain detailed technical or behavioral descriptions of the insiders’ actions prior to attack.

To help maintain the integrity of our results and evaluation methodologies, we are collaborating with Roy Maxion from CMU’s School of Computer Science, who is well known for his work on research methodologies and data quality.

Caveats

One important factor to note is that our data is somewhat biased, as we only consider malicious insiders who have been convicted of a crime related to their insider activity, and we are limited in the scope of data sources used. The number of insiders who are not detected, or detected but not reported, is probably much greater than the number of insiders convicted. Moreover, our results are not generalizable to the entire scope of IT sabotage. They do, however, provide some of the best evidence available for researchers and practitioners to develop novel controls for preventing and detecting some types of sabotage – including the types of high-impact crimes that result in prosecution and conviction of the insider.

Impact of our Work and Future Plans

Through this research, we plan to equip organizations, companies, and government agencies with improved insider threat detection capabilities. We believe our work will be particularly relevant to programs across the industrial and financial sectors throughout the United States.

We also hope that our findings will establish a foundation for future research. Specifically, we are interested in leveraging them to develop both technical and non-technical controls for preventing and detecting insider threats.