AI alerts for fraud. Why not prevent it?

Under relentless pressure from cybercriminals, security has evolved from simple passwords to multi-factor challenges as the good guys have tried to stay a step ahead. Beyond degrading the customer experience, these added security layers have largely been iterations on the old centrally controlled challenge/response authentication model, focused on ‘who’ is at the other end of the line.

Unfortunately, simple authentication is only as good as the secrets we maintain and the security of the channels we transact through. Cybercriminals have proven adept at stealing secrets through social engineering and the hacking of central data stores, and at intercepting and repurposing transactions across many transmission channels. Cybercrime losses therefore continue to rise.

Enter another layer of security to counter the growing threat: backend monitoring of parties and transactions. These security platforms have evolved too, from random sampling, to simple automated alerts based on risk factors, to AI-based systems tracking user behaviors, device fingerprints and much more. Regardless of complexity, they are all looking for telltale signs of data leaks and fraud.

Also, regardless of complexity, monitoring platforms generate a high rate of false-positive alerts on potentially fraudulent transactions. That is because the alternative, allowing actual fraudulent transactions to pass through undetected, is a bigger concern. A recent survey of members of the Mid-Size Bank Coalition of America (MBCA) found that, on average, only 8.9% of monitoring alerts warranted investigation and only 2.8% remained suspicious after investigation.
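To make the MBCA percentages concrete, the short sketch below applies them to a hypothetical volume of 1,000 alerts (the volume is an illustration, not a figure from the survey):

```python
# Hypothetical illustration of the MBCA survey percentages applied
# to an assumed volume of 1,000 monitoring alerts.
total_alerts = 1000
investigated = round(total_alerts * 0.089)      # 8.9% warranted investigation
still_suspicious = round(total_alerts * 0.028)  # 2.8% remained suspicious

# Every alert that did not remain suspicious was, in effect, a false positive.
false_positives = total_alerts - still_suspicious
fp_rate = false_positives / total_alerts
print(f"Investigated: {investigated}, still suspicious: {still_suspicious}")
print(f"Effective false-positive rate: {fp_rate:.1%}")
```

On those assumptions, 972 of every 1,000 alerts consume investigation effort without uncovering fraud.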

Whether the false-positive rate is 90% or 50%, it is clear that monitoring creates new forms of risk and friction as businesses train the models and investigate and clear the alerts. Of note, while monitoring may be performed with very sophisticated models, alert investigations are often carried out with legacy procedures. An investigative phone call to the phone number on record for an account may well be answered by the very criminals who engineered the transaction and have access to the associated account information.

As such, AI-driven monitoring is an important tool in fighting crime, but it is not a total solution, and as bad actors develop their own AI capabilities, the battle will continue.

If you could start from scratch, what requirements would you put in place for a cost-effective transaction security platform you could completely trust?

For starters, you would give counterparties (both customers and suppliers) the flexibility to originate their requests when and where they want. You would structure the message in an easy-to-use format with strong, tamper-proof security, regardless of which distribution channel is used. You would ensure the message confirms both the counterparty’s identity and the details of the transaction being requested, to eliminate identity theft and man-in-the-middle attacks. The platform would let you quickly and easily validate that the counterparty’s message is authentic, and it would be fully functional whether operating as a standalone service or integrated into a larger system.
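One common way to meet the "tamper-proof, channel-independent" requirement is to bind the transaction details and a per-user secret together in a message authentication code. The sketch below is a minimal illustration of that idea, assuming a shared secret between user and business; the field names and key are hypothetical, not a specific platform's API:

```python
import hashlib
import hmac
import json

# Sketch of a tamper-evident transaction message. The MAC covers the full
# request, so changing any detail in transit invalidates the message.
def sign_request(secret: bytes, request: dict) -> str:
    # Canonical serialization so both sides compute the MAC over identical bytes.
    payload = json.dumps(request, sort_keys=True, separators=(",", ":"))
    return hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()

def verify_request(secret: bytes, request: dict, tag: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign_request(secret, request), tag)

secret = b"per-user-shared-secret"          # illustrative key, not a real one
request = {"payee": "ACME Corp", "amount": "1250.00", "currency": "USD"}
tag = sign_request(secret, request)

ok = verify_request(secret, request, tag)               # authentic message
tampered = verify_request(secret, {**request, "amount": "9999.00"}, tag)
```

Because the code authenticates the message itself rather than the channel, the same check works whether the request arrives by web, mobile app or API.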

Further, the decentralized platform would hold no counterparty personally identifiable information (PII) that might create new attack surfaces; in fact, it would let you confirm authorization without maintaining centralized stores of PII. Finally, it would be easily deployed and highly scalable.

Ideally, a cloud-based transaction security platform would give users control to generate smart PINs on their mobile devices. The codes would provide definitive confirmation of the user’s identity and transaction details; businesses receiving a valid code could execute the request with confidence.

This "flipping the model" scenario is but one of the ways a financial institution can go beyond AI and ensure the integrity of digital transactions. AI is, and always will be, an important part of any enterprise fraud-prevention program. But be careful not to rely on it too heavily: AI is a double-edged sword.