Insurers’ Big Data Plans Could Fail if They Don’t Get Basics Right: Mair of Atticus DQPro

June 13, 2018 by Nick Mair

Executive summary: Many major insurers still rely on manual or semi-automated data quality checks, which are time consuming, costly and have a high margin for error. As a result, before they pursue advanced data projects such as artificial intelligence or machine learning, they need to have confidence in their data basics, writes Nick Mair, CEO and co-founder of Atticus DQPro, a London-based data monitoring and compliance platform for global re/insurers.

Artificial Intelligence? Machine Learning? Thinking big is great, but before insurers pursue such advanced big data projects, they must not lose sight of the fundamentals. They must first improve the quality of the data they capture at entry. After all, big data technology requires a firm base of fundamental data quality in order to function correctly; it's the old adage: "garbage in = garbage out."

The current debate in insurtech is around how data generated from emergent technologies, such as Artificial Intelligence (AI) and the Internet of Things (IoT), can be harnessed alongside existing carrier data to produce new insights on risk, pricing and customer engagement.

Insurance is transitioning from being a data-generating market to a data-powered market. Clearly, data is no longer a by-product of selling or administering insurance; it is now the key driver of business development and operations. And it will play an ever greater role as data-hungry technology such as AI and machine learning becomes a mainstream analytical and operational tool for the global re/insurance industry.

But are we getting ahead of ourselves? It's one thing to fire out buzzwords like AI or predictive analytics, or to debate the applications and ramifications of new technology; it's quite another to implement them in practice.

It would surely be fruitless to apply an advanced machine learning algorithm to analyze data sets for trends and opportunities if you are not confident in the quality of the data it will be learning from. Just one data point entered incorrectly at the start of a process can skew the results and ultimately cast doubt on an entire model or program.
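The skewing effect of a single entry error is easy to demonstrate. The figures below are purely illustrative, not taken from any real portfolio: a premium of 1,000 mis-keyed as 1,000,000 distorts a portfolio average by two orders of magnitude.

```python
# Toy illustration: one mis-keyed premium value skews an aggregate.
# All numbers are hypothetical, for illustration only.
premiums = [1_000, 1_200, 950, 1_100, 1_050]
clean_mean = sum(premiums) / len(premiums)      # 1060.0

bad = premiums.copy()
bad[0] = 1_000_000                              # single data-entry error
skewed_mean = sum(bad) / len(bad)               # 200860.0
```

Any model trained or calibrated on the second data set inherits that distortion, which is why entry-level validation matters before any analytics are layered on top.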

High quality data fundamentals are the backbone of every modernization and technology initiative, both within individual companies and for the market as a whole. But at present, many leading specialty carriers still struggle to have a clear view of what data is being stored and actually used across their operation.

In fact, and rather embarrassingly, far too many major carriers still rely on manual or semi-automated data quality checks, which are not only time consuming and costly, but also leave an unacceptably high margin for error.

This ad-hoc approach to data is unlikely to help insurers build efficient processes, which they must do or face disruption from companies able to adopt the best insurtech developments to improve the customer experience – both when customers buy coverage and when they submit a claim. After all, insureds have high expectations of digital services, shaped by banks and e-commerce companies that offer personalized online solutions. It is therefore not enough for insurers to maintain the status quo.

Ultimately, those companies and individuals tasked with overseeing the implementation of any AI or similar project will be responsible for the output of the technology. As such, we can expect issues around the quality of the data to come under more regulatory scrutiny, particularly if decisions that directly impact clients – such as whether to challenge a claim – are being made by AI rather than humans.

Data quality is key to compliance, present and future – and again, a data-powered approach, rather than simply a data-generating one, is the key to evidencing a quality data set with the correct controls in place. Becoming compliant and staying compliant in a shifting regulatory landscape need not be a complex process for insurers, even for those operating across multiple territories, such as carriers with U.S. operations underwriting in different U.S. states.

Clearly, global carriers must move away from manual, ad-hoc processes and establish automated data checks to monitor and evidence their compliance: for example, to verify all incoming policy coding, to check that business is not being written in territories subject to sanctions, to provide a Solvency II audit trail, or to evidence compliance with other regulations such as Lloyd's standards in the specialty London Market.
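A minimal sketch of what such an automated check might look like is given below. The field names, the sanctions list, and the rules themselves are all hypothetical placeholders, not any actual carrier's schema or DQPro's implementation; the point is that rules of this kind can run automatically on every incoming record and produce an auditable trail of issues.

```python
# Hypothetical data-quality rules applied to incoming policy records.
# Field names, territory codes and thresholds are illustrative only.
SANCTIONED_TERRITORIES = {"XX", "YY"}  # placeholder territory codes
REQUIRED_FIELDS = {"policy_id", "territory", "inception_date", "premium"}

def check_policy(record: dict) -> list:
    """Return a list of data-quality issues found in one policy record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("territory") in SANCTIONED_TERRITORIES:
        issues.append(f"territory {record['territory']} is sanctioned")
    premium = record.get("premium")
    if premium is not None and premium <= 0:
        issues.append(f"non-positive premium: {premium}")
    return issues

def audit(records: list) -> dict:
    """Run every rule over a batch, keyed by policy id for the audit trail."""
    return {r.get("policy_id", "?"): check_policy(r) for r in records}
```

Run nightly (or on arrival), such checks flag a mis-coded territory or an incomplete record at the point of entry, rather than months later in a regulatory return.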

Poor quality data not only affects an individual insurance company; it can also cause contagion further downstream for partner companies when databases are shared or integrated. Applying the same data integrity checks across multiple platforms helps identify issues upstream before they cause errors or incur cost in secondary systems.

The global insurance market is generating data at an exponential rate, but there is still a very high proportion of duplication and unnecessary manual data entry, resulting in an unacceptable margin for error. The daily back-office cost of the disorganized, remedial workflows that follow is significant.

Surely it is time to question the value of continuing to use manual, people-intensive methods and spreadsheets to check for data errors often created by other people and spreadsheets. It is time the industry recognized the need to act and invest now to put these essential issues right, before ushering in the big data technology era – at mind-boggling cost – and relying on its outputs.

It is completely in the control of carriers to ensure they get these fundamentals right, with minimal cost and disruption, and move into an insurtech future with data confidence.