The Ethics of AI in User Experience

Date: 3.15.19

Author: Karen Neicy

Category: OGKulture

When considering the ethical implications of AI systems, it’s easy for the mind to wander to an apocalyptic scene where robots have taken over the world. But what if the outcome is actually worse? What happens when algorithmic transparency ceases to exist, and rather than presenting decisions based on data, systems manipulate, negatively affect, and ultimately control user behavior to the user’s detriment? While advances in AI have a long way to go before reaching this type of widespread sophistication, we’ve already seen multiple cases of AI gone wrong in relation to user experience, and without large-scale regulatory bodies, the future looks bleak. When designing AI systems, we must ask whether those designing them are mal-intended, and it is safest to assume that, at least on a small scale, some are. This type of realistic pessimism could prevent egregious outcomes for society.

AI systems can already automate multiple parts of the user experience, from the data that’s aggregated to the creative that’s served in an ad. For now, this automation seemingly makes a user’s life easier and streamlines production. As AI advances, however, there will be no limit to the ways this data can be used to manipulate user behavior. What would that look like? Let’s say a user recently searched for divorce or bankruptcy advice, was in debt, and had a severe gambling problem. As they burn through the last of their savings, AI-controlled targeting ensures they make risky purchase decisions: Vegas vacations, adult website subscriptions, and high-stakes online poker games. While the ethical considerations aren’t as pointed in this example as, say, the fact that self-driving cars are programmed to kill you, the implications are still far-reaching. AI systems will eventually use data to manipulate users in ways that contribute to the degradation of society, the effects of which will make Kim Kardashian’s cultural black hole of influence seem microscopic.

But I don’t gamble, you’re thinking. And I’ve never even been married. I’m just an innocuous internet user with nothing interesting to offer the robots. Wrong. Enter the even more frightening hellscape of scenario #2:

You pride yourself on being a music aficionado. You started three indie bands by the time you were 18, and your carefully curated Instagram following is a collection of the finest in local art, culture, film, and music. Based on every online interaction you have, you’re finding what you agree to be only the BEST and most obscure new music, sharing it, and thereby creating the culture in which you exist, both online and in real life. In the background, record labels have developed proprietary algorithms not only to determine what you want to listen to, but to produce the music itself while slapping a human face on the entire operation. Just ask Amper Music, or artists like Poppy, whose work examines the implications of artificially manipulated popular culture. Manipulating user behavior in order to influence culture already happens on a large scale, but with AI, systems could eventually create the culture at large themselves.

So is this all just one meaningless doomsday apocalypse? Probably, but there’s hope that regulatory bodies and ethical frameworks will catch up to advances in tech. So far, they haven’t. With more than 2,000 AI startups in the US alone, very little time and money is being spent on regulating AI. In a recent survey of 1,400 business executives, only 32% considered AI one of the top ethical concerns facing the world. Companies like Equivant, which faced significant public scrutiny for using an AI system with inherent racial bias, should make board-level, executive decisions centered solely on ethical concerns before deploying their systems.

AI that isn’t inclusive has the largest and most far-reaching ethical impact on user experience. We’ve already seen it happen multiple times, and because systems are created by inherently biased humans, it is the single most unavoidable ethical implication of AI within user experience. When training a model on what it is to learn, there are two key failure modes: underfitting and overfitting. Finding the balance between them is what makes models successful. The problem is that if the data used to train the model is biased, the model will produce seemingly “appropriate-fitting,” or acceptable, results despite its unavoidable bias.
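To make that concrete, here is a minimal Python sketch using hypothetical, made-up loan-approval data. The “model” is deliberately trivial (it just learns the majority label per group), but that is the point: it fits its training data perfectly, neither underfitting nor overfitting, and still reproduces the skew baked into the sample.

```python
from collections import Counter

# Hypothetical, intentionally skewed training sample: in the data we
# collected, group "B" applicants almost always appear with a denial.
biased_training = (
    [("A", "approve")] * 40 + [("A", "deny")] * 10 +
    [("B", "approve")] * 2  + [("B", "deny")] * 48
)

def train(data):
    """Learn the majority label for each group.

    This model "fits" the training data as well as possible for its
    capacity -- no underfitting, no overfitting -- yet its predictions
    simply mirror whatever bias the sample contains.
    """
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(biased_training)
print(model)  # {'A': 'approve', 'B': 'deny'} -- every "B" applicant is denied
```

No metric computed on the biased sample alone would flag this model as broken; only auditing the training data itself, or evaluating on a representative set, reveals the problem.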

A classic example of biased, outright racist machine learning happened two years ago, when Google Photos implemented an AI labeling process that automatically identified two black men as gorillas. While the cause of the error lay in the training set, that set was built from data that was too narrow and exclusive. Google isn’t the only one to have made egregious user experience mistakes when deploying AI systems: it took the internet only 24 hours to indoctrinate Microsoft’s Tay chatbot into a sexist misogynist. These examples raise the larger, original question: what happens when people use AI systems to intentionally create negative outcomes? Without proper regulation, the implications of AI as it relates to controlling user experience outcomes will be beyond anything that can be properly managed or contained. Without honest conversations about the insular nature of Silicon Valley, algorithmic transparency, and machine learning in general, we are setting ourselves up for a disastrous outcome, culturally and socially. Until then, don’t forget to be nice to Alexa. She may end up controlling your destiny.