Addressing issues of conduct and operational risk created by digital technologies

“Without appropriate oversight, controls and change governance, technologies like RPA, machine learning and AI could become a source of reputational and operational risk.”

When we think of digital technologies, the risks envisioned alongside them are often those posed to human jobs, and not necessarily those posed to organizations and their customers. A 2016 article in The Guardian cited a research report noting that, by 2021, “… AI/cognitive technology will displace jobs, with the biggest impact felt in transportation, logistics, customer service and consumer services.”1 While numerous reports highlight the risks posed by these technologies to human workers, they rarely cover the other risks posed by implementing these new tools.

Technologies such as robotic process automation (RPA) can automate manual processes, and machine learning algorithms can help us better harness and use data to make intelligent decisions. Previously manual processes, such as calculating payments to customers or determining whether a customer will be approved for a loan, can now be automated, bringing significant savings—but also unique risks.

Martin Wheatley, former chief executive of the UK’s Financial Conduct Authority, outlined a vision in which “self-improving artificial intelligence” could help mitigate the risks posed to customers by mis-advice or human error.2 Though these technologies can mitigate a range of risks, organizations should also consider the risks posed by digital technologies themselves. Without appropriate oversight, controls and change governance, technologies like RPA, machine learning and artificial intelligence (AI) could become a source of reputational and operational risk.

For an example of new technologies creating operational risk, consider the recent case of an AI chatbot that Twitter Inc. users taught to disseminate offensive messages, leading to reputational damage and embarrassment for the chatbot’s owner.3 This chatbot example garnered a lot of public attention, but its impact was low when compared to the potential damage an RPA solution or a machine learning/AI algorithm could cause.

Imagine, for example, a “robo adviser” algorithm that recommends suitable investments to a customer based on their risk appetite. Although an algorithmic solution can be built to specified requirements that fit most customers, there is always a risk that certain customers’ scenarios are not handled correctly, and so the algorithm could mis-sell to a customer in error.
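To make that failure mode concrete, here is a minimal sketch of such a rule-based recommender. All names, the product table and the risk-appetite scale are hypothetical, not drawn from any real robo-adviser:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    risk_appetite: int             # 1 (cautious) to 5 (adventurous)
    investment_horizon_years: int  # how soon the customer needs the money

# Hypothetical rule table mapping risk appetite to a product category.
PRODUCT_BY_APPETITE = {
    1: "cash deposit",
    2: "government bonds",
    3: "balanced fund",
    4: "equity fund",
    5: "emerging-markets fund",
}

def recommend(customer: Customer) -> str:
    """Recommend a product from risk appetite alone.

    The gap: a customer with a high appetite but a very short
    horizon is still steered into a volatile product, exactly the
    kind of edge case that can lead to mis-selling if unreviewed.
    """
    return PRODUCT_BY_APPETITE[customer.risk_appetite]

# Edge case: adventurous, but needs the money back within a year.
edge_case = Customer(risk_appetite=5, investment_horizon_years=1)
print(recommend(edge_case))  # the rule still picks the volatile fund
```

Because the rule ignores the investment horizon entirely, the edge-case customer is mis-sold; a more robust design would validate every input dimension and route unusual profiles to a human adviser.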

If, as we’ve seen, new technology can bring risks beyond those posed to human jobs, how can financial firms address and resolve those risks?

Managing digital technology risk

Digital technologies like RPA, machine learning and AI can deliver benefits that reduce both operational risks and costs within financial services. These technologies require oversight, control and change governance to help mitigate potential regulatory and operational risks.

Below is a road map for mitigating the risks associated with digital technologies such as RPA and machine learning:

Actively monitor and manage digital technologies such as robotic process automation, and use well-designed data solutions to manage risk.

Reduce the need for human intervention by delegating some quality assurance tasks to RPA bots, with humans used for more complex checks.

Handle data appropriately, so that regulatory compliance can be evidenced and any potential customer impacts can be sized.
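The second step above, delegating routine quality assurance to automation while reserving humans for the complex checks, can be sketched as follows. The payment fields and the recomputation rule are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical bot-processed payments: each record carries the inputs
# ('principal', 'rate') and the amount the RPA bot actually paid out.
payments = [
    {"id": 1, "principal": 100.0, "rate": 0.05, "amount": 105.00},  # correct
    {"id": 2, "principal": 100.0, "rate": 0.05, "amount": 150.00},  # bot error
]

def independent_check(payment: dict) -> bool:
    """Recompute the expected amount from first principles and
    compare it with what the bot paid, within a penny tolerance."""
    expected = round(payment["principal"] * (1 + payment["rate"]), 2)
    return abs(expected - payment["amount"]) < 0.01

def qa_review(batch: list[dict]) -> list[dict]:
    """Auto-check every record and return only the failures, which
    are then escalated to a human for the more complex review."""
    return [p for p in batch if not independent_check(p)]

flagged = qa_review(payments)
print([p["id"] for p in flagged])  # only the erroneous payment reaches a human
```

The design choice here is that automation does the exhaustive, mechanical reconciliation, while human effort is concentrated on the small set of exceptions it surfaces.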

Properly overseeing the risks associated with RPA, machine learning and AI helps financial services firms maximize the benefits of implementing and maintaining these technologies. A less stringent approach, on the other hand, could prove costly: it could harm customers or lead to regulatory violations, particularly given the changes introduced by the General Data Protection Regulation (GDPR).