Explaining AI Decisions

IBM’s Institute for Business Value recently issued a report on the implementation of AI. According to its survey of 5,000 executives, 60 percent of those polled said they were concerned about being able to explain how AI is using data and making decisions in order to meet regulatory and compliance standards.

According to a story in The Wall Street Journal, there’s concern that:

AI decisions can sometimes be black boxes both for the data scientists engineering them and the business executives touting their benefits. This is especially true of deep-learning tools such as neural networks, which are used to identify patterns in data and whose structure roughly tries to mimic the operations of the human brain.

But just as in high school geometry, the question arises of how to show one's work; that is, how to reveal the way the AI system arrived at a specific conclusion.

The Journal describes measures IBM announced last week, including cloud-based tools that can show users which factors led to an AI-based recommendation.
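
The Journal's description suggests a feature-attribution capability. As a rough illustration of the general idea, rather than IBM's actual tooling, the hypothetical Python sketch below uses permutation importance to rank which input factors a model's recommendations leaned on; the model, feature names, and data are all assumptions for illustration.

```python
# Illustrative sketch only: surfacing "which factors led to a recommendation"
# via permutation importance. The model, feature names, and data are
# hypothetical, not IBM's actual tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["credit_utilization", "payment_history", "account_age"]  # assumed
X = rng.normal(size=(500, 3))
# Synthetic target: approvals driven mostly by the first two features.
y = (X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the recommendation leaned heavily on that factor.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In this sketch, payment_history and credit_utilization would rank highest, telling the user those factors drove the recommendation; the account_age score would sit near zero.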

The tools can also analyze AI decisions in real time to identify inherent bias and recommend data and methods to address that bias. They work with IBM's AI services and those from other cloud-service providers, including Google, said David Kenny, senior vice president of cognitive solutions at IBM.
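
How such real-time bias checks might work isn't detailed in the report. A minimal sketch, assuming a stream of model decisions tagged with group labels, could compute a standard fairness metric such as the disparate-impact ratio; the groups, threshold, and data below are hypothetical illustrations, not IBM's method.

```python
# Illustrative sketch only: flagging bias via the disparate-impact ratio,
# i.e. the rate of favorable outcomes for one group divided by the rate
# for a reference group. Group labels and threshold are assumptions.
import numpy as np

def disparate_impact(decisions: np.ndarray, groups: np.ndarray,
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates between two groups (1.0 = parity)."""
    rate_protected = decisions[groups == protected].mean()
    rate_reference = decisions[groups == reference].mean()
    return rate_protected / rate_reference

# Hypothetical stream of model decisions (1 = favorable) with group labels.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
# A common rule of thumb (the "80% rule") flags ratios below 0.8.
if ratio < 0.8:
    print(f"Possible bias: disparate-impact ratio = {ratio:.2f}")
else:
    print(f"No flag raised: disparate-impact ratio = {ratio:.2f}")
```

Run over a live window of decisions, a check like this could raise a flag the moment one group's favorable-outcome rate drifts well below another's, which is the kind of real-time signal the Journal describes.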