Can We Make Artificial Intelligence Accountable?

I write about AI, data, deep tech & self-management in the digital age

Could IBM's software show us how AI gets to its decisions?


The lack of explainability in decisions made by Artificial Intelligence (AI) programs is a major problem. This inability to understand how AI does what it does also stops it from being deployed in areas such as law, healthcare and enterprises that handle sensitive customer data. Understanding how data is handled, and how an AI program has reached a certain decision, is even more important in the context of recent data protection regulation, notably GDPR, which heavily penalizes companies that cannot provide an explanation and record of how a decision was reached (whether by a human or a computer).

IBM may have taken a major step towards tackling this issue, announcing today a software service that detects bias in AI models and tracks their decision-making. The service should allow companies to track AI decisions as they occur, and to monitor 'biased' actions to ensure that AI processes stay in line with regulation and overall business objectives.

If this software can truly explain the decisions taken by even the most complex deep learning algorithms, this development could provide the peace of mind that many companies need before unleashing AI on their data.

Breaking bad decision paths

'Explainability has been a big focus for our research,' said Jesus Mantas, Managing Partner at IBM Global Business Services, in an interview with me earlier today, and this software has grown out of that research. By measuring predicted decisions against the actual decisions taken by an AI program, including the weight given to each factor and the model's confidence in each decision, the software can theoretically determine whether the algorithm is biased and identify the cause of that bias.
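One simple way to check a model's decisions for group-level bias, of the kind described above, is to compare favorable-outcome rates across a protected attribute (often called statistical parity). The sketch below is purely illustrative, not IBM's software; the attribute name, group labels and toy decision records are assumptions.

```python
# Minimal sketch of a statistical-parity check on a model's decisions.
# A gap far from zero suggests one group receives favorable outcomes
# (outcome == 1) more often than the other. All data here is hypothetical.

def favorable_rate(decisions, attribute, group):
    """Share of favorable (1) outcomes among records in the given group."""
    subset = [d for d in decisions if d[attribute] == group]
    return sum(d["outcome"] for d in subset) / len(subset)

def parity_gap(decisions, attribute, group_a, group_b):
    """Difference in favorable-outcome rates between two groups."""
    return (favorable_rate(decisions, attribute, group_a)
            - favorable_rate(decisions, attribute, group_b))

# Hypothetical logged decisions from a model.
decisions = [
    {"sex": "f", "outcome": 1}, {"sex": "f", "outcome": 0},
    {"sex": "m", "outcome": 1}, {"sex": "m", "outcome": 1},
]

gap = parity_gap(decisions, "sex", "f", "m")
print(round(gap, 2))  # 0.5 - 1.0 = -0.5
```

A large negative gap here would flag the model for closer inspection, much as IBM's service is said to flag 'biased' actions for review.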

This could allow companies to prove their compliance with data protection regulations by tracking how an AI program uses its data, and make sure sensitive results are not compromised by a biased model. Companies can also set their own decision parameters to track, so that flawed decisions do not affect either business objectives or regulatory requirements.

Jesus Mantas was confident that in the case of a GDPR complaint going to court '[a company] will be able to provide information on when and how a decision was made, the factors influencing that decision and whether an algorithm was retrained', effectively providing all the information necessary to ensure that a decision made by AI is explainable with regard to regulatory requirements.

Whether the visualizations provided by the bias-detection software will be enough to understand, and more importantly explain, a complex deep learning algorithm remains to be seen, as the inner workings of deep neural networks have so far largely resisted interpretation. This is a big claim from IBM: only last year Tommi Jaakkola, an MIT professor working on applications of machine learning, said: 'If you had a very small neural network, you might be able to understand it, but once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.' The problem is not only that we can't see what deep learning algorithms are doing, but that we can't understand their workings, so will IBM's visualization software solve anything?

Data bias vs. model bias

Running in the cloud, the software works alongside the most commonly used AI frameworks (Watson, TensorFlow, SparkML, AWS SageMaker and AzureML), so that decisions made by AI can be checked and steered in the right direction in real time. This way of monitoring AI differs from existing bias-detection software in that it monitors the model itself, and the data running through it, at runtime, allowing companies to look under the hood and guide or retrain the program rather than combing through millions of data points after the fact to uncover a bias.
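The runtime-monitoring idea described above can be sketched in miniature: rather than auditing a dataset after the fact, a monitor wraps the model and updates per-group outcome rates as each decision happens. This is a conceptual illustration only; the class, the toy age-based model and the attribute names are assumptions, not IBM's implementation.

```python
# Conceptual sketch of runtime bias monitoring: wrap a model so that
# favorable-outcome rates per group are tracked as decisions occur.

class BiasMonitor:
    def __init__(self, model, attribute):
        self.model = model          # any callable returning 0 or 1
        self.attribute = attribute  # protected attribute to track
        self.counts = {}            # group -> (favorable, total)

    def predict(self, record):
        """Run the model and update running per-group statistics."""
        outcome = self.model(record)
        group = record[self.attribute]
        fav, total = self.counts.get(group, (0, 0))
        self.counts[group] = (fav + outcome, total + 1)
        return outcome

    def rates(self):
        """Current favorable-outcome rate for each observed group."""
        return {g: fav / total for g, (fav, total) in self.counts.items()}

# Toy model (purely illustrative): approve applicants over 30.
monitor = BiasMonitor(lambda r: 1 if r["age"] > 30 else 0, "sex")
for rec in [{"age": 40, "sex": "m"}, {"age": 25, "sex": "f"},
            {"age": 35, "sex": "f"}, {"age": 50, "sex": "m"}]:
    monitor.predict(rec)

print(monitor.rates())  # {'m': 1.0, 'f': 0.5}
```

In a production setting the diverging rates would trigger an alert while the model is live, which is the key difference from batch auditing that the article describes.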

Bias in data is a serious issue, as seen in US courtrooms that received advice from a risk-assessment algorithm which falsely assumed that black defendants were more likely to re-offend than white defendants. In that case, the algorithm was trained on biased data, which led it to make biased decisions. Jesus Mantas states: 'if an algorithm is making decisions based on age, race, or sex, the IBM software can visualize how the algorithm is performing and correct that bias, retraining the algorithm if necessary.'
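Data bias of this kind can often be surfaced before training with a standard fairness metric such as the disparate-impact ratio: the rate of favorable labels for the disadvantaged group divided by that of the advantaged group. The sketch below uses hypothetical records and group labels, and is a generic illustration of the metric, not IBM's tool.

```python
# Illustrative disparate-impact check on hypothetical training labels.
# A ratio well below 1.0 (a common rule of thumb flags values under 0.8)
# suggests the historical data itself is skewed against a group.

def disparate_impact(records, attribute, unprivileged, privileged):
    """Ratio of favorable-label rates: unprivileged / privileged group."""
    def rate(group):
        subset = [r for r in records if r[attribute] == group]
        return sum(r["label"] for r in subset) / len(subset)
    return rate(unprivileged) / rate(privileged)

# Hypothetical historical records with a binary favorable label.
records = [
    {"race": "black", "label": 0}, {"race": "black", "label": 1},
    {"race": "black", "label": 0}, {"race": "black", "label": 1},
    {"race": "white", "label": 1}, {"race": "white", "label": 1},
    {"race": "white", "label": 0}, {"race": "white", "label": 1},
]

ratio = disparate_impact(records, "race", "black", "white")
print(round(ratio, 2))  # 0.5 / 0.75 = 0.67
```

A model trained on labels like these would inherit the skew, which is why checks on the data matter as much as checks on the model.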

Boosting AI adoption

Alongside this software service, IBM has released the 'AI Fairness 360 Toolkit', an open-source library of algorithms, code and tutorials that lets programmers build bias detection into their models as they create them. Those building AI systems can then gain more insight into the reliability of their programs and feel confident that their models will stand up to full deployment.

A recent report (also from IBM) highlights that although willingness to adopt AI is very high (82% of 5,000 C-suite executives are considering it), fears around liability (63%) and a lack of technical skills (60%) are holding many businesses back from going further.

Real explainability of AI decisions could give executives the confidence they need to implement AI enterprise-wide.

What does this mean?

With this software, IBM has set itself the ambitious goal of increasing the global spread of, and collaboration around, AI technologies: promoting understanding of previously opaque processes and helping enterprises understand enough about AI to deploy it throughout their companies.

Time will tell whether this software provides as much insight as it promises, as the inner workings of the deepest machine learning algorithms are notoriously difficult to interpret. But this could be a major step towards accountable AI programs that can explain their actions, and could go a long way towards reassuring companies that AI is worth their trust.

Charles Towers-Clark is Group CEO of Pod Group, an IoT connectivity & billing software provider. His book ‘The WEIRD CEO’ covers AI & the future of work. Follow him @ctowersclark