Addressing Potential Risks of Artificial Intelligence Usage

By: Intelligent Automation Week

Recently, McKinsey & Co. published an article that confronted the potential risks of using and developing Artificial Intelligence at an organisational and societal level. The promise of AI has been met with an image of efficient businesses, access to new life-changing capabilities and a new sense of ease. McKinsey & Co. report that 80 per cent of organisations currently leveraging AI have already seen at least moderate results. Despite the excitement behind AI, there are a number of concerns that should be addressed by anyone working with its capabilities. According to the article, the most visible risks include privacy violations, discrimination, accidents, and the manipulation of political systems, all of which should be addressed with caution. More importantly, however, there are challenges that are still unknown and could have significant consequences. Allowing these risks to materialise could lead to a number of problems, including tightened regulation, reputational damage, significant financial loss and even criminal investigations.

But as an organisation starting out with Artificial Intelligence, how do you avoid these risks and their possible consequences? According to McKinsey & Co., organisations should build a strategy that allows them to consistently recognise risks, while engaging and educating the workforce so that employees can exercise responsibility. Organisations currently rely on hindsight to shape their reactions to AI risks, but if the tool is to be as successful as claimed, they need to shift their perspective to foresight. They should address and mitigate all potential risks that could occur, even those that are only a slight possibility. McKinsey & Co. have outlined five common risks with Artificial Intelligence, explaining why organisations should acknowledge them and how they can mitigate some of the consequences.

Technology:

Artificial Intelligence relies significantly on the seamless functioning of technology. However, we all know issues with technology can arise. While this may seem like a small risk, there can be severe consequences depending on where the technology fails and where your Artificial Intelligence capabilities are centred. Such failures can potentially compromise an enterprise. For example, McKinsey & Co. reference a major financial institution that ran into trouble after its compliance software failed to flag trading issues because of how data was fed into the system. Not only is this likely to have led to revenue loss, it may also have caused significant reputational damage with customers.

Interaction:

Having access to Artificial Intelligence capabilities in day-to-day work is a very new phenomenon. Many people who come into close contact with the tool may not know how best to interact with it, and this can cause a number of problems. In manual roles, accidents are a real possibility if a worker does not know how to override a system. The same risk exists in many other roles, where human judgement can go wrong when deciding whether to override a system. Misjudgements in development and data training can also contribute to privacy and security issues. McKinsey & Co. highlight that the above are only unintended consequences: without safeguards in place, people can also deliberately misuse AI in ways that cause serious harm.

Data:

There are a number of complexities surrounding data in the modern workplace. According to the article, discovering, cleaning, linking and using data has become significantly more difficult as the amount of unstructured data grows at an exponential rate. A piece of information that is redacted in one AI system may still be available elsewhere. Anyone starting their Artificial Intelligence journey should ensure they stay in line with privacy regulations such as the European Union's General Data Protection Regulation (GDPR).

Security:

Issues with security are emerging at a high rate. There is huge potential for fraudsters and hackers to exploit 'non-sensitive' data and use it outside what regulations allow. McKinsey & Co. report that individuals may be able to create false identities or manipulate someone's data. While this may not be the fault of the company, it may still face significant backlash and serious consequences, such as criminal charges.

Models:

There is a high possibility that AI models can deliver 'biased results, become unstable, or yield conclusions for which there is no actionable recourse for those affected by its decision.' Biased decisions are likely to be unintentional, arising from the nature of the data and the specific information the tool has access to or has been trained on. 'Consider, for example, the potential for AI models to discriminate unintentionally,' explains McKinsey & Co., 'against protected classes and other groups by weaving together data to create targeted offerings.' While AI can be used for good, its intelligence is always accessible to those who want to use it for ill.

As stated in the article, there is still a lot to learn about Artificial Intelligence and its place in business and in society. There are a number of risks that many organisations are failing to address, and others that have yet to be discovered. To stay ahead of the game, it is important to consider every scenario and risk that you may come across. Engage your business in conversations about the responsibilities that come with using these tools, and always prepare for any situation that may arise. Organisations that nurture their relationship with the tool and use it effectively will reap the benefits of Artificial Intelligence.

Interested in learning more about the risks of Artificial Intelligence and what your business needs to consider on this journey? Intelligent Automation Week 2019, the biggest IA event in Europe, will be taking place in London in November and will discuss issues surrounding the future of AI. Find out more about the event here!

