Ethics in Artificial Intelligence

What is Artificial Intelligence?

A major buzzword for a while now, Artificial Intelligence (AI), often in the form of machine learning, touches almost every part of our lives, whether we realise it or not.

Far from a Jetsons-style robot in every home, it arrives through our online food shop, the streaming services we use to listen to music and watch TV, and the social media we scroll through.

It’s big news for every industry and sector. Looking to the years ahead, Future Now says, “Some predict an upheaval as big as, or bigger than, that brought by the internet.”

Why should we worry about ethics in AI?

Although AI is exciting in terms of technological advances, what does it have to do with ethics? Well, with developments in AI coming thick and fast, just because something can be done doesn’t mean it necessarily should be done.

This is uncharted territory and a step into the unknown. Facebook found that out in August 2017, when two of its AI chatbots, Alice and Bob, invented their own language and started having a chat. They were swiftly turned off.

With data breaches becoming something of the norm these days (British Airways, Ticketmaster and Yahoo! all had well-documented breaches in 2018), the capabilities of AI are also open to misuse. That misuse can take the form of poor decision-making, for instance through discrimination or bias, or of malicious intent, such as weaponry and cyberconflict.

Big businesses are taking note too, with Google and IBM leading the way in producing manifestos and putting systems in place to police their own algorithms. But does this mean we can all breathe easily?

AI researcher Joanna Bryson argues not:

“Trust is a relationship between peers in which the trusting party, while not knowing for certain what the trusted party will do, believes any promises being made. AI is a set of system development techniques that allow machines to compute actions or knowledge from a set of data. Only other software development techniques can be peers with AI, and since these do not “trust”, no one actually can trust AI.”

Unlike a human, AI cannot be held accountable. Bryson goes on to say, “when a system using AI causes damage, we need to know we can hold the human beings behind that system to account.” Businesses therefore need to turn their own ethics into practice to ensure that accountability.

What does this really mean?

For AI to be consistent with your company’s ethics, you first need to understand the principles underpinning your innovation and strategy.

By creating best practice and benchmarking what is and isn’t acceptable, everyone involved should have a clear idea of what should, and conversely shouldn’t, be done for their customers.

Transparency matters at all times: customers should, for instance, be told when an algorithm is deployed or changed, especially if it has a significant impact on them as individuals. Which makes you think again about the Facebook example raised earlier…

Trends and predictions

With the analyst firm Gartner choosing digital ethics and privacy as a major trend for 2019, and stating that the key issue is the abuse of trust, ethics can be considered a pretty serious topic.

Acceptable levels of ethics are something we all need to think about as a starting point, and Microsoft recently cautioned every industry on exactly this, advising organisations to develop their own guidelines in line with its new publication, ‘The Future Computed’.

Microsoft believes that AI should be deployed according to six core principles: that AI should be “fair, reliable and safe, private and secure, inclusive, transparent and accountable”, and that it shouldn’t “threaten equity or security.”

With the onus very much on industry to remain transparent and accountable in order to safeguard customer information… how do you feel about AI? And what will your company be doing to police its AI systems?