Capital One AI chief sees path to explainable AI

Nitzan Mekel-Bobrov, head of artificial intelligence work at card issuer Capital One Financial, disputes the notion that deep learning forms of machine learning are "black boxes," and insists that sensitive matters, such as decisions to assign credit, can be made "much more interpretable."


The so-called black box of artificial intelligence has been a topic of much debate in recent years. Can neural networks whose functioning includes "hidden layers" that defy easy explanation ever be trusted with the most sensitive tasks society might ask of them?

One practitioner offers an adamant "yes," insisting older approaches to statistics and probability are not necessarily more transparent than some of today's deep learning.

A year ago, Nitzan Mekel-Bobrov joined McLean, Va.-based Capital One Financial as its chief of artificial intelligence and machine learning for the card business.

At $36 billion in market capitalization, Capital One is dwarfed by retail banking competitors such as JPMorgan Chase and Bank of America, but the firm takes pride in its use of technology throughout its operations. In an hour-long interview with ZDNet, Mekel-Bobrov explained how machine learning is proliferating through parts of the organization.

He also offered extended commentary on some of the negative spin about deep learning, as he sees it.

One myth about AI that Mekel-Bobrov won't tolerate is the notion that today's deep learning approaches are universally too mystical, too much of a black box to be employed in sensitive financial applications.

"I don't buy into the notion it's all a black box and so you can't use it," said Mekel-Bobrov in an interview this week.

"If I take a very simple linear regression [an older form of statistical equation solving], but now I create 10,000 of these across a very complex data ecosystem, I've now created something that's very hard to understand."
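His point can be made concrete with a toy sketch (my own illustration, not Capital One's code): a single linear regression is readable from its handful of coefficients, but a boosting-style chain of many tiny linear fits, scaled down here from his 10,000 to 1,000 for speed, is "just" linear regression at every step and yet opaque as a whole.

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear regression: three coefficients, each directly readable.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Now chain 1,000 tiny linear fits, each one correcting the previous
# stage's residuals on a random pair of features.  Every piece is
# simple linear regression, but the combined decision is opaque.
stages = []
resid = y.copy()
for _ in range(1_000):
    cols = rng.choice(3, size=2, replace=False)
    w_s, *_ = np.linalg.lstsq(X[:, cols], resid, rcond=None)
    w_s *= 0.01                      # boosting-style shrinkage
    stages.append((cols, w_s))
    resid = resid - X[:, cols] @ w_s

def ensemble_predict(x):
    """Sum of 1,000 partial linear models: accurate, hard to explain."""
    return sum(x[:, cols] @ w_s for cols, w_s in stages)
```

The ensemble fits the data about as well as the single regression, but no one coefficient in it explains a decision, which is the "complexity creates opacity" argument in miniature.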

Mekel-Bobrov recently returned from the prestigious NeurIPS AI conference in Montreal, where he participated in a panel discussion with other members of the financial community on topics including bias, fairness, and explainability.

"One camp said deep learning is inherently more of a black box, and then another camp said that Random Forests [an older statistical approach] can be equally so."

But, "I'm in the camp that says it's complexity that creates opacity."

"A good deep learning approach could give us more comfort that we know what's happening in the system than having 1000 of these human-created rules, created over decades."

There was a bit of a joke, he said, at the workshop: Do we actually think that the current state of affairs in banking and credit is "extremely easy to understand for consumers?"

"I've been rejected for credit, and it's hard for me to understand why," says Mekel-Bobrov. "With machine learning, we feel we can actually raise the bar. The human decision piece is very fuzzy. You can strip that out and make things much more interpretable."

Deep learning is "only just now being applied at scale and with heterogeneity," he observes. "That's why I'm optimistic about our ability to tease apart the inner workings of the classifiers," meaning, neural networks.

"With systemization, I feel strongly we could make things more explainable and interpretable."

In banking, of course, the question of the black box is not a purely academic discussion: Capital One has more than one regulator, he observes. Although that fact is "a challenge," he says, "it's also an opportunity for us and the industry to take the lead."

Regulation itself is not a joke, he avers. "We have the incentive to take the lead here, where the tech companies don't."

"We only roll out machine learning where we feel comfortable there are no biases or lack of transparency or challenges -- both from an internal risk perspective, and from an external regulation and a customer wellbeing perspective."

About 70 percent of the time, Capital One is using machine learning as a "consumer" of the technology, says Mekel-Bobrov, making an approximation. The other 30 percent or so, "we can be ML producers," meaning, bringing innovation to the basic technology.

Although the company works with popular AI frameworks, such as Google's TensorFlow, "There's nothing we can take just as is; there's a ton of re-work and optimization that has to happen."

"For example, with natural-language processing, there are certain predictions we want to make in order to help customers reach their financial targets based on word embeddings that are -- they're very unique to our circumstances."
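As a rough illustration of what "word embeddings" means here (toy, hand-made vectors, not the bank's learned, domain-specific ones): each word is mapped to a numeric vector, and geometric closeness between vectors stands in for relatedness in the vocabulary.

```python
import numpy as np

# Toy embeddings; real ones would be learned from domain text, and
# these particular words and values are invented for illustration.
emb = {
    "payment":  np.array([0.9, 0.1, 0.0]),
    "autopay":  np.array([0.8, 0.2, 0.1]),
    "vacation": np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In this toy space "payment" sits nearer to "autopay" than to "vacation"; tuning such neighborhoods to a bank's own vocabulary is the kind of rework Mekel-Bobrov describes.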

One area where Capital One is "really pushing the cutting edge" in the use of machine learning is in "personalization" of service, says Mekel-Bobrov. He makes an analogy to Netflix's or Amazon's recommendation engines. "We admire a lot of what they do, but they are selling a single product; we are not in the business of selling a product, we are in the business of relationships, and we need to understand the customer deeply and continuously."

That involves studying a plethora of "heterogeneous" data points drawn from text, email, phone calls made directly to Capital One, and other "unstructured" data. Techniques such as "long short-term memory" networks, or LSTMs, which specialize in processing sequential data, can be helpful in tracking points of interaction over many years, he says. But that's just the beginning. "We don't know what features matter" to a neural network in the customer relationship, he says. "We want to actively learn them. We really want to use AI to mimic the relationship that two people would have."
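A minimal numpy sketch of the LSTM mechanism he refers to (the generic cell equations, not Capital One's architecture): gates decide how much of past interactions to keep in a running cell state, so the final hidden state summarizes an arbitrarily long sequence.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over inputs x with hidden state h, cell state c."""
    n = h.shape[0]
    z = W @ x + U @ h + b        # all four gate pre-activations at once
    i = sigmoid(z[:n])           # input gate: admit new information
    f = sigmoid(z[n:2 * n])      # forget gate: retain old context
    o = sigmoid(z[2 * n:3 * n])  # output gate: expose the state
    g = np.tanh(z[3 * n:])       # candidate update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Feed a sequence of (invented) interaction feature vectors through
# the cell; h ends up carrying a summary of the whole history.
rng = np.random.default_rng(1)
d_in, d_h = 4, 8
W = rng.normal(scale=0.1, size=(4 * d_h, d_in))
U = rng.normal(scale=0.1, size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(20, d_in)):   # 20 interactions over time
    h, c = lstm_step(x, h, c, W, U, b)
```

In practice such a cell would be trained end to end rather than run with random weights; the sketch only shows how the state threads through a long interaction history.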


One area he's enthusiastic, but guarded, about is fraud detection and prevention.

"The fraud on a swipe has materially improved based on machine-learning based frameworks," says Mekel-Bobrov. Pressed for a detailed explanation, he demurs, saying, "you'd be amazed by the sophistication of the fraudsters, reading something [in an article] and testing those defenses… Let's just say, it relies on a number of things, including the network model, the weights, the parameters, the feature engineering, but also really keeping the architecture very close to the business problem."

Asked if the bank is waiting on any particular advances in the broader field of machine learning, Mekel-Bobrov says, "The biggest one I can think of is mobile."

"There are certain things in mobile that will unleash for us newer capabilities in our app and in our mobile platforms." For example, he'd like to see an ability for real-time "scoring" of neural networks in a mobile app. Another one is "model personalization, a future where there is a [network] model for an individual."

"It's possible now, but not in a practical way, not until compute gets even better than it is now, both in terms of inference and training."
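One reason on-device "scoring" waits on compute is model size. A common workaround, sketched here as a general technique rather than anything Capital One described, is 8-bit quantization, which shrinks float32 weights fourfold so inference can run locally on a phone.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit quantization of a weight tensor."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.normal(size=(256, 64)).astype(np.float32)  # invented weights
q, s = quantize_int8(w)

# 4x smaller than float32, with rounding error bounded by scale / 2.
err = np.max(np.abs(dequantize(q, s) - w))
```

Production mobile runtimes do this (and more) under the hood; the sketch just shows the size/precision trade that makes real-time, on-device scoring plausible as hardware improves.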
