Stay Ahead of the Curve by Using AI in Compliance

Although human oversight is required, advanced technologies built on AI will become pivotal in building safer financial markets and a safer world.

A decade after the global financial crisis of 2007–2008, the risks that financial institutions face remain at levels that continue to concern global financial institutions and analysts alike. Two things contribute to this situation: financial firms operate in an increasingly interconnected, digital world where the rules around compliance are constantly being tested by the threat of cyberattacks; and they must navigate diverse, sometimes conflicting global data regulations, along with the vulnerabilities inherent in the open, collaborative nature of the Internet of Things.

As a result, regulators, compliance officers, and businesses are up against considerable odds to deliver compliance in a climate rife with the near-everyday possibility of cybercrimes, attacks on personal data, and challenges to the foundations of national and international financial stability.

In such an environment, the various constituents are seeking a set of clearly stated compliance rules that are as iterative, quick-moving, and responsive to changing circumstances as today's global financial market itself. This is prompting many financial-sector players to turn to next-generation tech solutions to track, manage, and better prepare their institutions for the kind of unforeseeable and potentially catastrophic risks that today's interconnected and "always-on" world poses.

Built on a backbone of advanced technologies such as artificial intelligence (AI) and machine learning (ML), these solutions, with their unsurpassed capacity to reliably analyze reams of data, offer compliance teams the ability to, in real time, both quickly quarantine suspicious activity and swiftly approve safe financial transactions.

Money-laundering estimates indicate that "dirty money" accounts for 2%–5% of global GDP per annum, or up to $2 trillion of global GDP in current US dollars.

Researchers at RMIT University in Melbourne, Australia, are reported to be helping the country's financial intelligence agency — the Australian Transaction Reports and Analysis Centre (AUSTRAC) — to find and stop suspicious financial activity, including money laundering, by implementing AI/ML tools.

With black money worth about $4.5 billion said to be circulating in the Australian economy annually, AUSTRAC is reported to have been struggling in recent years to keep up with the sheer volume of transactions it needs to scour. As a result, it partnered with researchers from RMIT to set up an AI-enabled ML system that accurately identifies suspicious financial activity by detecting potentially nefarious transaction patterns.

Unlike previous detection systems, the new AI-driven system empowers the financial intelligence agency to spot suspicious patterns across millions of transactions, even when those patterns are hard to trace back to specific individuals. It does this by feeding the ML system previously gathered data as well as insights procured from the analysis of known money-laundering networks, which helps AUSTRAC substantially reduce the volume of transactions it needs to sift through.
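The core idea of such pre-screening — flagging the small fraction of transactions that deviate sharply from the norm so analysts review far less volume — can be sketched in a few lines. This is a purely illustrative toy, not AUSTRAC's or RMIT's actual model; the field names, the z-score heuristic, and the threshold are all assumptions made for the example.

```python
# Toy anomaly filter: flag transactions whose amount deviates strongly
# from the batch norm. Real AML systems use far richer features
# (counterparties, timing, network structure), not just amounts.
from statistics import mean, stdev

def flag_suspicious(transactions, z_threshold=3.0):
    """Return the transactions whose amount is a statistical outlier."""
    amounts = [t["amount"] for t in transactions]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for t in transactions:
        z = abs(t["amount"] - mu) / sigma if sigma else 0.0
        if z >= z_threshold:
            flagged.append(t)
    return flagged

# Hypothetical batch: 19 routine payments and one outsized transfer.
batch = [{"id": i, "amount": 100} for i in range(19)]
batch.append({"id": 19, "amount": 1_000_000})
flagged = flag_suspicious(batch)  # only the 1,000,000 transfer is flagged
```

The payoff is the volume reduction the article describes: analysts see one flagged item instead of twenty, and a learned model plays the role of this hand-set threshold.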

Similarly, HSBC (along with Europe's other large banks) has been moving toward adopting AI-based software to help improve its anti-money-laundering (AML) processes in the wake of heavy fines that several financial institutions have had to pay for failing to prevent money-laundering activities. For example, HSBC is partnering with a Silicon Valley-based AI startup, Ayasdi, to boost the efficiency of its AML investigations by replacing manual processes with automated ones. In a pilot of the startup's AI technology, HSBC saw a 20% drop in the number of false-positive transaction investigations (without reducing the number of cases taken forward for closer study) — a crucial win for the bank as it continues to drive adoption of next-generation technologies to lower risks while also lowering costs.
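The shape of that win — fewer false positives escalated, with no genuine case dropped — can be illustrated with a toy threshold-tuning sketch. This is not HSBC's or Ayasdi's actual method; the risk scores, outcomes, and the min-score heuristic below are invented purely for illustration.

```python
def tuned_threshold(reviewed_alerts):
    """Given past (risk_score, went_forward) review outcomes, return the
    highest cutoff that still retains every case that went forward."""
    forwarded = [score for score, went_forward in reviewed_alerts if went_forward]
    return min(forwarded) if forwarded else 0.0

# Hypothetical review history: three genuine cases, three false positives.
history = [(0.95, True), (0.85, True), (0.80, True),
           (0.60, False), (0.55, False), (0.40, False)]
cutoff = tuned_threshold(history)
kept = [(s, w) for s, w in history if s >= cutoff]
# All three genuine cases survive the cutoff; the false positives drop out.
```

In practice a learned model supplies the risk scores, but the compliance constraint is the same one the pilot highlights: cut reviewer workload without reducing the cases taken forward.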

As financial compliance requirements grow in complexity in response to the threat of attacks on financial institutions, it is clear that advanced technologies such as AI and ML, as well as natural language processing, will continue to play a leading role in helping financial organizations better meet their regulatory obligations. With speed and accuracy being essential requirements to both maintain compliance and prevent financial fraud and crime, these technologies are uniquely qualified to help financial compliance teams fulfill their pressing daily requirements.

Although human oversight is required for the final calls that financial institutions may need to take regarding the blocking or quarantining of suspicious activities, it is clear that advanced technologies built on AI will become pivotal in the endeavor to build safer financial markets and a safer world.

Eric Winston is the Executive Vice President, General Counsel, and Chief Ethics and Compliance Officer responsible for Mphasis' global legal and compliance function and policies. He has spent nearly 20 years guiding international market-leading public and private equity-owned ...
