Our SogetiLabs expert Rik Marselis talks about making AI more trustworthy by making it explainable.

CONTACT

Rik Marselis

Quality and Testing Consultant | Netherlands
+31 886 606 600


The number of software systems that use artificial intelligence (AI), and in particular machine learning (ML), is increasing. AI algorithms outperform people in more and more areas, avoiding risks and reducing costs. Despite the many successful applications, AI is not yet flawless. In 2016, Microsoft introduced a Twitter bot called Tay. It took less than 24 hours before Tay generated racist tweets, because users deliberately fed the bot far-right examples, after which Microsoft decided to take it offline. This example did not occur in a critical domain, but AI algorithms are being deployed in increasingly risky environments. Applications in healthcare, transport, and law have major consequences when something goes wrong. Moreover, the more AI applications there are, the more influence they have on people's daily lives. Examples of critical implementations are a self-driving car that must make a decision in a complex traffic situation, or a cancer diagnosis that has to be made.