Corporate pre-crime: The ethics of using AI to identify future insider threats

20 August 2018

To protect corporate networks against malware, data exfiltration and other threats, security departments have systems in place to monitor email traffic, URLs and employee behaviors. With artificial intelligence (AI) and machine learning, this data can also be used to make predictions. Is an employee planning to steal data? To defraud the company? To engage in insider trading? To sexually harass another employee?

As AI gets better, companies will need to make ethical decisions about how they use this new ability to monitor employees, particularly around what behaviors to watch out for and what interventions are appropriate. Information security teams will be on the front lines.

In fact, some types of predictions about employee behaviors are already possible. "The reality is that it's really easy to determine if someone is going to leave their job before they announce it," says one top information security professional at a Fortune 500 company, who did not want to be named. "I started doing it ten years ago, and it's actually highly reliable."

For example, an employee about to leave the company will send more emails with attachments to their personal address than usual, he says. This is an important signal for security teams to watch, since departing employees may try to take sensitive information with them, and they tend to download it early, before telling their managers about their plans.
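The signal described above, a spike in attachment emails to a personal address relative to an employee's own baseline, can be sketched as a simple anomaly check. The data, threshold, and function name below are illustrative assumptions, not part of any vendor's product or the professional's actual method:

```python
from statistics import mean, stdev

# Hypothetical weekly counts of emails with attachments sent to a
# personal address, per employee (illustrative data only).
history = {
    "alice": [1, 0, 2, 1, 1, 0, 1, 2],
    "bob":   [0, 1, 0, 1, 0, 1, 1, 0],
}
current_week = {"alice": 9, "bob": 1}

def flag_anomalies(history, current, threshold=3.0):
    """Flag employees whose current count exceeds their own
    historical baseline by more than `threshold` standard deviations."""
    flagged = []
    for emp, counts in history.items():
        mu = mean(counts)
        sigma = stdev(counts) or 1.0  # guard against a flat history
        if (current[emp] - mu) / sigma > threshold:
            flagged.append(emp)
    return flagged

print(flag_anomalies(history, current_week))  # → ['alice']
```

Comparing each employee against their own history, rather than a company-wide average, matters here: a count that is normal for one role can be a strong departure signal for another.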