Monday December 05, 2016

According to this article, the biggest threat to artificial intelligence is human stupidity. Really, Sherlock? Hell, the biggest threat to self-driving cars right now is cars driven by humans. So why does everyone assume AI is going to take over the world and kill us all at the first opportunity?

But perhaps more dangerous is the tendency to treat AI as a magical, mystical source of truth. As the introduction to our special report makes clear, the output of an algorithm is only ever as good as the data put in, or the rules that humans set. The black-box nature of algorithms that can learn and evolve in ways their human developers find hard to follow should not mean that their answers are accepted without question. Rather, ways must be found to make AI-led decision-making as easy to understand -- and to challenge -- as any other kind. Some researchers have set out ideas for how this could be done, built around factors such as responsibility, explainability, accuracy, auditability and fairness, but more work is needed here.