Description

Advances in AI are happening at a tremendous pace, and machine learning systems in particular are being hastily deployed in settings ranging from social media, search engines, and advertising to military applications and legal systems. At first sight the results are often promising, but more and more "outliers" are becoming visible as well. Systemic bias in culture, language, or business practices is often intensified: from the Tay chatbot turning racist, to both Google and Flickr classifying images of African-Americans as "monkeys", to discrimination and sexism encoded in language models used for everything from translation to court sentencing decisions. Luckily, initiatives addressing some of these concerns are emerging: more interpretable models, new privacy regulations, novel privacy-preserving learning algorithms, and researchers and engineers standing up for more just and transparent machine learning models and methods.

The panel aims to debate these themes with the speakers and the audience, going beyond the usual "AI hype" to discuss both the amazing progress and the setbacks in making societies better, more humane, and ready for the future!

Speaking at the panel will be:

Roelof Pieters, Panel Host, Co-founder at creative.ai

Marek Rosa, CEO/CTO at GoodAI

Françoise Provencher, Data Team Lead at Shopify

Hendrik Heuer, Researcher at Institute for Information Management (ifib) at the University of Bremen