CTO Corner: Augmented Intelligence in Financial Services

August 24, 2017

Highlights

Artificial Intelligence (AI) has advanced significantly and is being embedded into many important products. Despite past hype cycles, experts believe that this time AI is likely to greatly impact how business is conducted. To succeed, AI systems must remain trustworthy and compliant with company policy and government law and regulation, even when under attack, keeping sensitive personal information secure and private. Augmented intelligence (systems that effectively partner AI with humans to enhance human capabilities rather than replace them) holds the potential to satisfactorily address these concerns.

The Opportunity

Financial systems with augmented intelligence have the potential to be a game changer, allowing financial services firms to create a more personalized service environment, resulting in more satisfied customers, while reducing operational cost and risk and enhancing privacy.

Take-away

It is important for the industry to address the privacy, security, legal and regulatory challenges these new systems present. It is also important that workers overcome resistance to change and master the skills needed to effectively partner with these augmented intelligence systems.

In my April 2015 CTO Corner I talked about the application of Artificial Intelligence (AI) and machine learning in Financial Services. Since that time, use of the technology has increased dramatically. This CTO Corner discusses the risks and business issues AI presents and introduces the concept of augmented intelligence, in which machines assist humans in knowledge tasks, as a business enhancer and a way to combat security and fraud risks.

Background: Work began on AI at the dawn of computing. Despite the disillusionment resulting from a series of hype cycles, AI continues to be refined and improved. The explosion of data coming from social networks and networked sensors has further propelled resources devoted to AI as a tool for the timely analysis of this data deluge, and stimulated investment funding. In 2016, equity funding in AI start-ups reached an all-time high of $2.5 billion.

Democratization of AI: One of the reasons AI use has accelerated is its democratization through:

Wider familiarity and comfort with the technology as AI gets integrated into standard products coming out of Google, Microsoft and Amazon, among others.

This makes it easier to build and try out AI applications, lowering the barrier to entry. Another factor touted as likely to accelerate AI is the emergence of specialized hardware that makes dramatically increased AI processing power available at affordable cost. However, when specialized AI hardware, then LISP machines, was produced in the 1980s, demand died as general-purpose computing power increased dramatically, and coincidentally AI fell into a “winter.”

Still not general intelligence: Although still characterized as weak AI (intelligence focused on a narrow task) rather than strong AI (as skillful and flexible as humans in solving unfamiliar problems), advances in deep learning (algorithms that employ layers of nonlinear processing units, such as neural networks) have proven especially useful in speech recognition, natural language processing, vision detection, machine translation, data mining, and fraud detection. As the complement of applications grows, combined weak AI applications will appear to exhibit more general intelligence. However, they will remain well short of strong AI. We are not likely to see all humans out of a job, although sufficiently repetitive tasks that use machine-readable data are at risk.
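The phrase "layers of nonlinear processing units" can be made concrete with a toy sketch. The following is purely illustrative (random weights, no training), showing only how stacked layers with a nonlinearity between them form the basic shape of a deep network:

```python
import numpy as np

def relu(x):
    # The nonlinear "processing unit" applied at each layer
    return np.maximum(0, x)

def forward(x, layers):
    # Pass the input through each (weights, bias) pair in turn,
    # applying the nonlinearity between layers -- the "deep" in deep learning
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Two stacked layers: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 4)), np.zeros(4)),
          (rng.normal(size=(4, 2)), np.zeros(2))]
output = forward(np.array([1.0, 0.5, -0.2]), layers)
```

Real systems learn the weights from data and stack many more layers; this sketch only conveys the structure the article's definition refers to.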

Impact of AI on Financial Services: The financial services industry has already seen the emergence of AI applications such as robo-advisors, intelligent fraud detection monitors, and stronger identity verification. The industry can provide even greater value if these applications are delivered as augmented intelligence solutions, where AI and humans partner to provide greater trust and personalized solutions to their customers’ unique needs, without customers feeling marketed to. They can be expected to provide value in the following areas:

Enhancing human decision-making through better prediction and learning. Humans have well-documented built-in biases and limitations, which can be overcome and augmented by machine intelligence able to analyze, in real time, the very large data sets now available.

Improving the effectiveness and efficiency of business processes and operations through “smart” automation. AI can reduce cost while increasing the effectiveness of the full spectrum of operational processes, from fraud detection to customer modeling and capital planning.
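The fraud-detection end of this spectrum can be illustrated with a minimal sketch. This is a deliberately simplified rule (a robust outlier test on transaction amounts, using only the standard library); production fraud monitors combine many behavioral signals, and the account history below is invented for illustration:

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    # Robust outlier rule: distance from the median, scaled by the
    # median absolute deviation (MAD), so one huge fraudulent charge
    # cannot distort the baseline enough to hide itself
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [i for i, a in enumerate(amounts)
            if mad > 0 and abs(a - med) / mad > threshold]

# A customer's typical charges, then one wildly out-of-pattern transaction
history = [42.0, 38.5, 45.0, 40.0, 41.2, 39.8, 43.1, 2500.0]
suspicious = flag_anomalies(history)   # indices of flagged transactions
```

In an augmented intelligence setting, such a flag would route the transaction to a human analyst for judgment rather than trigger an automatic block.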

Privacy Concerns: The growing presence of AI in our lives raises questions over how information is protected and used. As machines learn more about users’ personal information, likes, dislikes, and behavior, they represent an attractive target that, if compromised, could produce disastrous results. Appropriate safeguards must be implemented to minimize these risks.

Conflicts and ethical challenges: AI introduces a broad range of ethical challenges. Users might not mind that their credit card company keeps information about their purchases, but how will they feel about being marketed products based on this data? Or when an insurance premium increases because an AI program monitoring the user’s driving history concludes they present a greater risk than previously assumed? These concerns are best summed up by the following joke:

Hello! I’d like the usual pizza.

May I suggest this time ricotta, arugula with dry tomato rather than sausage with cheese?

No, I hate vegetables.

But your cholesterol is not good.

How do you know?

From your blood test results for the last 7 years.

But I do not want this pizza, I already take medicine.

You have not taken medicine regularly. It’s been 4 months since you purchased a 30-tablet box.

I bought more from another drugstore.

It’s not showing on your credit card.

I paid in cash.

Your bank statement does not show you withdrawing that much cash.

Enough! I’m going to an Island without internet and no one to spy on me.

I understand sir, but you need to renew your passport as it has expired.

Although the joke illustrates an AI agent gone horribly wrong, AI can add real value, help retain existing customers, and attract new ones if used appropriately. For example, it can ask customers if they are willing to give permission to use their data to:

Find ways to reduce insurance costs (e.g. lower rates for safe driving, billing insurance for only the actual time on the road, recommend safe driving routes).

Find ways to improve the terms of a loan (e.g. suggest ways to secure the loan with existing assets, or ways to improve one’s credit rating).

Security concerns: As industry becomes increasingly dependent on machines automatically making high-level decisions, it becomes vulnerable to AI programs being compromised, misinformed, spoofed, misled, or corrupted. People will become accustomed to trusting the machine and ignoring warning signs. Monitoring AI programs is a problem similar to monitoring human insiders for fraud and misconduct, only things will occur faster and less transparently.

How can we monitor AI programs that operate faster and analyze greater volumes of data?

Some companies send employees on mandatory leave to ensure they aren’t committing fraud; is there an AI equivalent?

How can a company ensure that AI programs are complying with policy and law?

How can one design AI programs that are secure from tampering?

Can AI technology be used to ensure the trustworthiness of AI systems?

Can independent programs and processes be developed to verify AI programs are trustworthy?

Can we implement through technology the equivalent of the two-man rule?
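One answer to the last question is to require that no single party, human or AI, can release a sensitive action alone. The sketch below shows the shape of such a control; the class name and approver IDs are invented for illustration, and a real system would verify cryptographic signatures and roles rather than bare identifiers:

```python
class TwoPersonControl:
    """Release a sensitive action only after two distinct approvers sign off.

    A minimal sketch of the 'two-man rule' applied to machine decisions:
    an AI program may propose the action, but execution stays gated.
    """

    def __init__(self, action):
        self.action = action
        self.approvals = set()   # distinct approver IDs collected so far

    def approve(self, approver_id):
        self.approvals.add(approver_id)

    def can_execute(self):
        # A set guarantees the two approvals came from different parties
        return len(self.approvals) >= 2

transfer = TwoPersonControl("release high-value wire transfer")
transfer.approve("analyst-alice")
transfer.approve("analyst-alice")   # repeated approvals do not count twice
blocked = transfer.can_execute()    # still False: only one distinct approver
transfer.approve("analyst-bob")
released = transfer.can_execute()   # True: two distinct approvers
```

The same gate could pair a human with an independent AI verifier, directly addressing whether AI technology can be used to check the trustworthiness of AI systems.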

Regulatory concerns: The industry needs to understand how AI implementations fit into the current regulatory environment, including current U.S. federal and state consumer protection and privacy laws, Europe’s Data Protection Directive 95/46/EC and GDPR, and likely future regulations.

As AI programs become more widespread and privacy and security concerns grow and incidents occur, how are regulations likely to change?

What are the risks the enterprise is still responsible for?

What new risks will it be accepting?

How does a company ensure it is complying with policies and laws when an AI program is making the decision?

Finding ways to ease concerns: It is important to become attuned to these issues, and to start experimenting with remedies and solutions to address them. Would it ease concerns if financial institutions provided users with a privacy dashboard (such as those provided by Google and Microsoft) that showed what they know about you, who they are sharing this information with, and how it could be turned on or off? Are there other proactive measures that can be taken? Augmented intelligence may be the answer. While AI can address problems that are reasonably well-defined and narrow in scope, humans excel at defining and solving complex problems requiring the flexibility to adapt and make changes to successfully address these concerns.
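The privacy dashboard idea can be sketched as a simple data structure. Everything here is hypothetical (the class, the data categories, and the recipients are invented to illustrate the concept, not any real firm's product):

```python
class PrivacyDashboard:
    """Hypothetical per-customer view of what data a firm holds,
    who it is shared with, and a switch the customer controls."""

    def __init__(self):
        self.sharing = {}   # data category -> recipient and on/off state

    def record(self, category, recipient):
        # Sharing defaults to on until the customer opts out
        self.sharing[category] = {"shared_with": recipient, "enabled": True}

    def opt_out(self, category):
        if category in self.sharing:
            self.sharing[category]["enabled"] = False

    def report(self):
        # The dashboard view: what we know, who sees it, and its current state
        return {cat: dict(entry) for cat, entry in self.sharing.items()}

dash = PrivacyDashboard()
dash.record("driving history", "insurance partner")
dash.record("purchase history", "marketing team")
dash.opt_out("purchase history")
view = dash.report()
```

The substance of such a dashboard is organizational (honoring the switch downstream), but exposing the state transparently is the trust-building step the column suggests.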

Acquiring needed skills: With AI-automated assistance, people will be able to spend more time on creative work, focusing on the 20% of non-routine tasks that drive 80% of the insight and value. But will they? When I automated processes in Navy fusion centers, people found it difficult to give up performing the routine tasks to concentrate on the non-routine issues, to work differently. Can today’s workers be motivated to work on the more challenging areas? Can they overcome the instinct to resist change, and acquire and use the necessary new skill sets? Can your company reap the potential benefit of augmented intelligence, not just job replacement? Can the cultural barriers be overcome? In the next five years, our industry could reach a point where robots do the easy things while humans help meet the governance challenges and deal with more non-routine, dynamic issues. But can our workers cross the chasm, or will the enterprise risk creating an automated environment that is mechanistic and rigid?

A case for operational AI applications: Although front office applications such as financial robo-advisors and intelligent assistants are exciting, operational back office applications can be an important first step, providing necessary background and experience to tackle more sensitive customer-facing functions. They can be fielded in a more controlled environment and show more immediate and measurable productivity improvements. They are lower profile, but can have a big impact as they move into the mainstream. Examples include intelligent fraud detection, loan scoring, spotting non-standard behavior patterns when auditing financial transactions, and sifting through and analyzing thousands of pages of tax changes. They can also include modeling how customers might react to various scenarios, testing assumptions on users’ digital twins, modeling scenarios for capital planning, or using natural language and graph processing to flag transactions for compliance reviews.

Conclusion: Augmented intelligence (effectively partnering AI with humans) can help AI systems realize their full potential and overcome privacy, security, and regulatory concerns, provided workers can master the necessary skills and overcome resistance to change.