
New technology may be seen as a time-saving solution that frees up people's time in professional services – but it also raises plenty of questions.

Professional services firms are increasingly contemplating the use of Artificial Intelligence (AI), machine learning and robotic process automation (RPA) solutions in their everyday work.

The benefits and opportunities across the professions are huge.

A look at the use of technology in the legal profession, for example, provides some useful insights. Some types of AI application – such as due diligence software, predictive technologies that forecast litigation outcomes, or analytics software that lets lawyers use data points from past case law to identify trends and patterns – can improve productivity and performance significantly.

Document automation and electronic billing are also playing an increasingly important role in professional services, freeing fee earners from time-consuming, low-value activities.

Key concerns

But what about the concerns? Are AI systems going to replace professionals? The use of AI in the legal profession has already diminished the role of paralegals and researchers. It has also led to calls for lower legal fees where firms use AI instead of human staff.

And what about the interaction between professional advisers and these AI systems? At what point does subjective judgement need to be made by human beings based on the data these systems produce?

It’s worth remembering that all AI applications are built on historical data or on the experience and judgements of the people developing and using them.

But historical data may carry a particular bias, and systems can be manipulated. An AI system designed around such data could perpetuate those biases. How can this be prevented? Is it possible to impose an ethical framework on these machines that enables them to remove bias and/or manipulation?

Another important concern relates to professional accountability for AI. Who incurs liability if something goes wrong with an AI system? Is it the responsibility of the software vendor, the user or both? And what happens if a client accesses an AI service directly?

All of this raises the question of whether a professional quality standard for the use of AI in legal and other professional services is required to address these issues.

Despite the considerable benefits AI can undoubtedly bring, for the time being there are many unanswered questions about how professionals should use and control it.