Automation, AI, and the future of audit and compliance

One of my favorite parts of the novel Jurassic Park is when the techs are describing why they are so sure the dinosaurs aren’t reproducing in the park. They have given all the animals amino acid therapies that essentially render them female. Yeah, says one skeptic, but how do you know? And that’s when the tech crew reveals a tracking system showing that the number of dinosaurs in the park is holding flat. They had 40 velociraptors last month, and they have 40 this month. Hence, no reproduction. But wait, says the skeptic. Your system is only looking for 40 velociraptors. Once it finds 40, it stops counting. What happens when you take that limit off? The number skyrockets, and suddenly the park realizes it has a huge problem on its hands, because it has been putting a lot of faith into a very dumb system.

I think about that scene quite a lot when we discuss artificial intelligence in compliance, because until fairly recently, the options we had for automating compliance functions and testing were too dumb to depend upon. But that is no longer the case, and as I took part in the many discussions around what content would make up the Compliance 2017 conference, currently underway in Washington, D.C., one of the themes that arose right away was technology in general, and artificial intelligence in particular.

At a session entitled The Future of Audit—presented by Kevin Lane and T’Shaka Lee of Deloitte & Touche, and Forrest Deegan of Abercrombie & Fitch—a fascinating discussion about incorporating automated reasoning and decision making into audit and compliance shed some light on where the technology is, who is using it, and what its future deployment is likely to be. When the presenters polled the audience, nearly one-third of the people in the room were using some kind of automated reasoning and testing in their compliance programs. That meant, of course, that two-thirds were not. This is striking, considering the anecdote T’Shaka Lee shared: a week-long auditing process was reduced to just a few hours by a program that took about two days to develop.

The biggest challenge seems to be figuring out how to obtain and leverage automated testing tools. Just over half of the attendees said their biggest hurdle was familiarity with the technology; another quarter said they struggled to demonstrate the technology’s effectiveness. The key there is that CEOs and COOs are looking for ways to improve decision-making with shorter lead times. Automation helps on both counts, because it can also help identify risks around the corner. That’s a hard argument to turn down.

One of the biggest challenges that remains, however, is sheer protectionism on the part of the compliance teams most likely to be automated. There is a certain level of “subconscious resistance” that can inhibit compliance professionals from championing automation, but that need not be the case. Compliance staff whose tasks have been automated can be redeployed to something more strategic and more rewarding elsewhere in the organization. The key, though, is learning the additional skills to make that transition a successful one. The hard truth is that not everybody can easily re-skill.

The remaining poll questions of the session told an interesting story. Attendees were interested in specific aspects of automation, such as analytics, cognition, and repetitive process automation, though most were interested in all of them. More than half of the attendees came from organizations already using analytics in testing—no surprise there, as it’s the easiest element of automation to incorporate into operations. But only a third of attendees were using cognition in testing, which speaks to how AI remains, for most, something people are interested in but only just beginning to use. Case in point: eight in 10 attendees said they planned to expand their use of automation in the coming year.

The widespread sentiment was that, no matter the current level of engagement between compliance and audit, nearly everyone wanted to deepen that relationship; the big obstacle to doing so was a lack of resources. This isn’t all that different from a lot of initiatives we see in compliance: an understanding of the opportunity, a lack of engagement in actually seizing it, and a lack of resources as the reason why. Now that we’re seeing the same pattern repeat itself on the artificial intelligence front, we could be excused for thinking this is just the natural order of things in the compliance world. Only it shouldn’t be.

Automation isn’t just a matter of convenience. It’s a matter of survival: when the DOJ is looking for a particular piece of information and you can’t find it because your system is too antiquated, automation suddenly doesn’t seem like such a luxury any more. When you need to scan all outgoing communications for potentially non-compliant language, knowing that an intelligent filter can do the job with far better results than humans, automating isn’t a matter of preference; it’s a matter of risk management.
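To make the idea concrete, even the simplest version of such a filter is easy to sketch. The following is a minimal, hypothetical example only—real compliance surveillance tools use far richer rule sets and machine learning models—and the rule names and patterns below are invented for illustration:

```python
import re

# Illustrative rule set: pattern names and regexes are hypothetical,
# not drawn from any real compliance program.
PATTERNS = {
    "guarantee_language": re.compile(r"\bguarantee(?:d|s)?\s+returns?\b", re.IGNORECASE),
    "off_channel": re.compile(r"\b(?:text|whatsapp)\s+me\b", re.IGNORECASE),
}

def scan_message(text: str) -> list:
    """Return the names of any rules this outgoing message triggers."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

messages = [
    "We guarantee returns of 12% every quarter.",
    "Please find the attached quarterly report.",
    "Just WhatsApp me and we can settle this off the record.",
]

# Collect only the messages that trip at least one rule.
flagged = {m: scan_message(m) for m in messages if scan_message(m)}
```

A system like this can review every outgoing message, every time, without fatigue—which is exactly the consistency advantage the session's presenters were pointing to.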

Most compliance officers know this, but for those who don’t, the time to understand is now. Even with a crackerjack cognitive AI system, it takes time to get it up and running, because really good cognitive solutions learn from the data they are connected to. You have to feed them enough data to correlate and learn from. The earlier you adopt, the earlier you see the return.