Artificial Intelligence Has Some Explaining to Do

Artificial intelligence software can recognize faces, translate between Mandarin and Swahili, and beat the world’s best human players at such games as Go, chess, and poker. What it can’t always do is explain itself.

AI is software that can learn from data or experiences to make predictions. A computer programmer specifies the data from which the software should learn and writes a set of instructions, known as an algorithm, about how the software should do that—but doesn’t dictate exactly what it should learn. This is what gives AI much of its power: It can discover connections in the data that are more complicated or nuanced than any a human would find. But this complexity also means that the reason the software reaches any particular conclusion is often largely opaque, even to its own creators.

For software makers hoping to sell AI systems, this lack of clarity can be bad for business. It’s hard for humans to trust a system they can’t understand—and without trust, organizations won’t pony up big bucks for AI software. This is especially true in fields such as health care, finance, and law enforcement, where the consequences of a bad recommendation are more substantial than, say, that time Netflix thought you might enjoy watching The Hangover Part III.

Regulation is also driving companies to ask for more explainable AI. In the U.S., insurance laws require that companies be able to explain why they denied someone coverage or charged them a higher premium than their neighbor. In Europe, the General Data Protection Regulation that took effect in May gives EU citizens a “right to a human review” of any algorithmic decision affecting them. If the bank rejects your loan application, it can’t just tell you the computer said no—a bank employee has to be able to review the process the machine used or conduct a separate analysis.

David Kenny, who until earlier this month was International Business Machines Corp.’s senior vice president for cognitive services, says that when IBM surveyed 5,000 businesses about using AI, 82 percent said they wanted to do so, but two-thirds of those companies said they were reluctant to proceed, with a lack of explainability ranking as the largest roadblock to acceptance. Fully 60 percent of executives now express concern that AI’s inner workings are too opaque, up from 29 percent in 2016. “They are saying, ‘If I am going to make an important decision around underwriting risk or food safety, I need much more explainability,’ ” says Kenny, who is now chief executive officer of Nielsen Holdings Plc.

In response, software vendors and IT systems integrators have started touting their ability to give customers insights into how AI programs think. At the Conference on Neural Information Processing Systems in Montreal in early December, IBM’s booth trumpeted its cloud-based artificial intelligence software as offering “explainability.” IBM’s software can tell a customer the three to five factors that an algorithm weighted most heavily in making a decision. It can track the lineage of data, telling users where bits of information being used by the algorithm came from. That can be important for detecting bias, Kenny says. IBM also offers tools that will help businesses eliminate data fields that can be discriminatory—such as race—and other data points that may be closely correlated with those factors, such as postal codes.
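The “three to five factors” style of explanation described above can be illustrated with a minimal sketch. For a simple linear scoring model, each feature’s weight multiplied by its value gives that feature’s contribution to the decision, and ranking contributions by magnitude surfaces the handful of factors that mattered most. All feature names and numbers below are hypothetical, and real systems (including IBM’s) use more sophisticated attribution methods; this only conveys the idea.

```python
import math

# Hypothetical linear credit-scoring model: weights and one applicant's
# (normalized) feature values. These numbers are illustrative only.
weights = {"income": 0.8, "debt_ratio": -1.2, "age": 0.1, "num_accounts": 0.3}
applicant = {"income": 0.5, "debt_ratio": 0.9, "age": 0.2, "num_accounts": 0.4}

def top_factors(weights, features, k=3):
    """Rank features by the magnitude of their contribution (weight * value)."""
    contributions = {f: weights[f] * v for f, v in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

# The model's overall score, squashed to a probability for context.
score = sum(weights[f] * v for f, v in applicant.items())
probability = 1 / (1 + math.exp(-score))

print(f"approval probability: {probability:.2f}")
for name, contrib in top_factors(weights, applicant):
    print(f"{name}: {contrib:+.2f}")
```

Here the high debt ratio dominates the explanation because its contribution has the largest magnitude, which is exactly the kind of summary a loan officer could relay to an applicant.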

QuantumBlack, a consulting firm that helps companies design systems to analyze data, promoted its work on creating explainable AI at the conference, and there were numerous academic presentations on how to make algorithms more explainable. Accenture Plc has started marketing “fairness tools,” which can help companies detect and correct bias in their AI algorithms, as have rivals Deloitte and KPMG. Google, part of Alphabet Inc., has begun offering ways for those using its machine learning algorithms to better understand their decision-making processes. In June, Microsoft Corp. acquired Bonsai, a California startup that was promising to build explainable AI. Kyndi, an AI startup from San Mateo, Calif., has even trademarked the term “Explainable AI” to help sell its machine learning software.

Article Credit: Bloomberg
