We all see the great potential AI is bringing us. But is it really bringing it to everyone? How are we ensuring under-represented groups are included and vulnerable people are protected? What do we do when our technology is unintentionally biased and discriminates against certain groups? And what if the data and the AI are correct, but the side effect is that some groups are put at risk? All questions we need to think about when we are advancing technology for the benefit of humanity.

Sharing what I've learned from my work in diversity and digital, and from following great minds in this field such as Joanna Bryson, Virginia Dignum, Rumman Chowdhury, Juriaan van Diggelen, Valerie Frissen, Catelijne Muller, and many more.

In my day job I steer between needs and wants (business, users) and those who make it (happen). For over 10 years I have been a co-founder and board member of WPP, a foundation dedicated to improving the lives of LGBTI people in workplaces all over the world.

http://callingbullshit.org/case_studies/case_study_criminal_machine_learning.html 1800 photos: 1100 of these were photos of non-criminals scraped from a variety of sources on the World Wide Web (e.g. LinkedIn) using a web spider; 700 of the photos were pictures of criminals, provided by police departments.
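That split is the study's core flaw: every photo's label is perfectly correlated with its source, so a model can "predict criminality" simply by picking up camera or compression artifacts that differ between police photos and web photos. A minimal sketch of this confounding effect, with simulated data standing in for the photos (all features and numbers here are hypothetical):

```python
# Hypothetical illustration: when every "criminal" photo comes from one
# source (police) and every "non-criminal" photo from another (the web),
# a classifier can score well by learning source artifacts, not faces.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1800  # same size as the study's dataset

# Labels: 700 "criminal" (police photos), 1100 "non-criminal" (web photos).
y = np.array([1] * 700 + [0] * 1100)

# Pretend feature 0 is a camera/compression artifact that tracks the
# *source*, while features 1..9 carry no signal about the person at all.
artifact = np.where(y == 1, 1.0, 0.0) + rng.normal(0, 0.3, n)
noise = rng.normal(size=(n, 9))
X = np.column_stack([artifact, noise])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")  # high, yet the model
# has learned nothing about criminality, only which pipeline took the photo
</code>
```

The classifier scores well on held-out data while knowing nothing about the people pictured, which is exactly why the two classes would need to be collected through the same pipeline.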

4.
But sometimes along the line things don't quite turn out right (yet)

5.
Some examples of where it did not go quite right….
Sales of a US-brand helmet in Asia were low… Turns out the average head size in the database was not diverse enough.
Image source: Che-Wei Wang – mindful algorithms: the new role of the designer in generative design - TNW conference 2018
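A simple distribution-coverage check could have flagged the helmet problem before launch. As a minimal sketch (with made-up numbers, not real anthropometric data), compare the head sizes your design database covers against the population you are actually selling to:

```python
# A minimal sketch (illustrative numbers, not real anthropometric data):
# before designing for a new market, check how much of the target
# population falls inside the range covered by your design database.
import numpy as np

rng = np.random.default_rng(1)

# Head circumference (cm) in the design database vs. the target market.
database = rng.normal(57.0, 1.5, 5000)  # hypothetical US-skewed sample
target = rng.normal(55.0, 1.8, 5000)    # hypothetical target population

lo, hi = np.percentile(database, [2.5, 97.5])
covered = np.mean((target >= lo) & (target <= hi))
print(f"database covers sizes {lo:.1f}-{hi:.1f} cm")
print(f"share of target market inside that range: {covered:.0%}")
```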

6.
Some examples of where it did not go quite right….
Automatic translations are gender-biased

7.
Some examples of where it did not go quite right….
Google's image classifier wrongly labelled photos of black people as gorillas.

40.
✅ Key questions when developing or deploying an algorithmic system
• Who will be affected?
• What are the decisions/optimisation criteria?
• How are these criteria justified?
• Are these justifications acceptable in the context where the system is used?
• How are we training our algorithm?
• Does training data resemble the context of use? (see the sketch below)
IEEE P7003 Algorithmic Bias Considerations by Ansgar Koene
IEEE standard Algorithmic bias https://standards.ieee.org/project/7003.html
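The last question on the list lends itself to an automated check. One possible approach (a sketch, not a complete bias audit) is to compare each feature's training distribution against a sample from production with a two-sample Kolmogorov–Smirnov test and flag the features that drifted:

```python
# One way to act on "does training data resemble the context of use?":
# compare each feature's training distribution against a sample seen in
# production. A sketch, not a complete bias audit.
import numpy as np
from scipy.stats import ks_2samp

def flag_drifted_features(train: np.ndarray, deployed: np.ndarray,
                          names: list[str], alpha: float = 0.01) -> list[str]:
    """Return names of features whose deployed distribution differs."""
    flagged = []
    for i, name in enumerate(names):
        stat, p = ks_2samp(train[:, i], deployed[:, i])
        if p < alpha:
            flagged.append(name)
    return flagged

# Hypothetical example: the age distribution drifts between training data
# and the context of use, while a second feature stays stable.
rng = np.random.default_rng(2)
train = np.column_stack([rng.normal(35, 8, 2000), rng.normal(0, 1, 2000)])
deployed = np.column_stack([rng.normal(50, 8, 2000), rng.normal(0, 1, 2000)])
print(flag_drifted_features(train, deployed, ["age", "score"]))  # ['age']
```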

44.
In summary
• We are all biased, so let's be aware
• Lots of laws, regulations and other sources offer guidance
• And let's be critical about the data we use
• Let's design, test and implement with humans in the loop (see the sketch below)
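One common way to keep humans in the loop (a sketch with hypothetical names and thresholds, not a prescribed design) is to let the system act automatically only on high-confidence predictions and route everything else to a person for review:

```python
# A common "human in the loop" pattern (hypothetical names/threshold):
# act automatically only on confident predictions and route everything
# else to a person for review.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def decide(probabilities: dict[str, float], threshold: float = 0.9) -> Decision:
    """Pick the top label; defer to a human below the confidence threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return Decision(label, confidence, needs_human_review=confidence < threshold)

print(decide({"approve": 0.97, "reject": 0.03}))  # handled automatically
print(decide({"approve": 0.55, "reject": 0.45}))  # flagged for human review
```

The threshold itself is a design decision worth testing with the affected groups in mind: set it too low and the "loop" is only nominal, set it too high and the humans drown in reviews.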