I saw an interesting tweet from Paul Haahr, a top search ranking engineer at Google for over 15 years. He cited a NY Times article titled Can A.I. Be Taught to Explain Itself? and wrote "This article by @cliffkuang on Explainable AI is an excellent introduction to hard, important problems. These issues are coming up at work every day; the article describes them well and gives some reasons for optimism."


It reminded me of when he spoke at SMX some time ago and said Google doesn't fully understand RankBrain. I believe this article gets at what he meant by that, since RankBrain is a machine-learning, AI-based search feature.

If Google cannot fully understand how the algorithm improves itself and makes certain decisions, then how can Google fully debug it when it goes bad?

"As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it." That is not just a science-fiction, Terminator-style fear; in reality, it is a problem engineers have to grapple with in order to understand how to improve their AI.

Anyway, reading the article might shed some light on those challenges and perhaps show you how far off we are from robots killing us.