
Put your Data at Work: Financial Services

The insurance industry today is awash in data about customers, trends and competitors. The volume of data available to businesses from internal systems and external touch points grows rapidly from month to month, and the benefits of analytical applications that capitalize on this data stream to improve sales, marketing, underwriting and risk reduction are unquestionable.

Improve underwriting efficiency in Usage-Based Auto Insurance

Traditional auto-insurance companies attempt to differentiate and reward “safe” drivers based on their historical driving behaviours (accident rates, mileage driven at night or on highways, time of day and hours behind the wheel), with the aim of aligning premiums with empirical risk, that is, with how policyholders actually drive. Safer drivers pay less because the insurance company knows how they drive; and because policyholders know this, a virtuous circle begins in which reckless behaviours are avoided in exchange for a discount.

Unfortunately, the black-box service market does not seem to have capitalized on recent advances in GPS, accelerometer and telemetry technologies, and this, combined with the consolidation of the service offering, has led to a massive commoditization of Pay-as-You-Drive and Pay-per-Use policy terms. It is time for insurers to take back their own customers' data: to capture the driving data streaming from vehicles, to develop proprietary insights and to apply discount rates or more targeted and differentiated fees.

This becomes easier the greater the granularity of the data acquired, but fine granularity means high storage costs: in a typical legacy setup the company retains only 25% of the available data, and processing usually takes a full working week. By adopting a Hadoop architecture, an insurer can retain 100% of policyholders' geo-location data and process this huge data stream in hours. The speed of the new Big Data analytics makes it possible to analyse large amounts of data very quickly, and it becomes a powerful ally in the fast, detailed reconstruction of accidents, providing an effective and immediate defence against fraud.
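As a purely illustrative sketch of the usage-based idea (the event schema, feature weights and discount cap below are all assumptions, not an actual pricing model), per-driver risk indicators of the kind described above might be derived from raw telematics events like this:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class TelematicsEvent:
    # Hypothetical record emitted by a vehicle black box.
    driver_id: str
    timestamp: datetime
    speed_kmh: float
    road_type: str        # e.g. "urban", "highway"
    hard_brake: bool

def risk_features(events: List[TelematicsEvent]) -> dict:
    """Aggregate raw telematics events into per-driver risk indicators."""
    total = len(events)
    night = sum(1 for e in events if e.timestamp.hour >= 22 or e.timestamp.hour < 6)
    highway = sum(1 for e in events if e.road_type == "highway")
    braking = sum(1 for e in events if e.hard_brake)
    return {
        "night_ratio": night / total,
        "highway_ratio": highway / total,
        "hard_brake_ratio": braking / total,
    }

def premium_discount(features: dict) -> float:
    """Map risk indicators to a discount between 0% and 20% (illustrative weights)."""
    penalty = (0.5 * features["night_ratio"]
               + 0.2 * features["highway_ratio"]
               + 1.0 * features["hard_brake_ratio"])
    return max(0.0, 0.20 * (1.0 - min(penalty, 1.0)))
```

In a real deployment this aggregation would run at scale over the full event stream in the Hadoop cluster rather than in memory; the point is only that retaining every event, instead of a 25% sample, lets the insurer compute such features per policyholder rather than per coarse segment.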

New opportunities for effective preventative maintenance

Any production downtime is a potentially huge loss of revenue for companies, due to lost production output, the cost of repairs and the waste generated in the process. To minimize this risk, manufacturers usually apply preventative maintenance programs, which are largely a calendar-based approach that calls for equipment to be serviced or replaced at predetermined intervals: replacing a component after a specified length of time or number of operations, for example. A condition-based maintenance program, by contrast, focuses on the condition of equipment and how it is operating rather than on a predetermined schedule.

With advances in technology, engineering devices are now embedded with sensors and RFID tags that can actively transmit vital information about machine variables such as temperature, oil level, vibration, working loads, humidity, production rate, waste metrics and breakdowns. Collecting these enormous amounts of machine-generated data logs in a data lake, and combining them with the fault settings registered when a robot breaks down and with the related maintenance history, helps identify the patterns that lead to failure, modelling the condition of in-service equipment in order to predict when maintenance should be performed.
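A minimal sketch of the condition-based idea, under stated assumptions (a single numeric sensor channel and an arbitrary three-sigma rule; a production system would learn thresholds from the fault and maintenance history instead): flag readings that deviate sharply from the recent baseline, so an inspection can be triggered before an actual breakdown.

```python
from collections import deque
from statistics import mean, stdev

def maintenance_alerts(readings, window=20, threshold=3.0):
    """Flag sensor readings that deviate sharply from the recent baseline.

    A reading more than `threshold` standard deviations away from the
    rolling mean of the previous `window` readings raises an alert.
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append((i, value))
        history.append(value)
    return alerts
```

For example, thirty stable temperature readings followed by a sudden spike would yield a single alert at the spike, while normal fluctuation stays below the threshold.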

Analyze Insurance Claims with a Shared Data Lake

Any insurance company cares about minimizing risk and maximizing opportunity, balancing the premiums it takes in against the risk of paying claims, where a few individuals can cause extraordinary losses if their malicious activities go unnoticed. Insurance companies store and process huge amounts of data, but these usually remain isolated in functional silos; integrating them in Apache Hadoop can deliver better insight, improving operational margins and anticipating one-time events that might cause catastrophic losses.

Definitely, any insurer already has systems for analysing structured data at scale. Unfortunately, the information collected on a claim-by-claim basis most of the time remains confined to separate functional areas of the company (claims management, administration, finance, fraud management) and is rarely used by the actuarial function to improve pricing models. Analysis of less-structured claims notes or social media would add significant value in understanding the soft links between the stakeholders involved in an accident, or more generally in a claims-management process, but it does not fit or scale easily in traditional data warehouses, because combining textual or social data with structured data in an RDBMS environment is not economically viable.

A “schema on read” system such as Apache Hadoop permits the ingestion of a much wider range of data types, easily and quickly unified in a data lake for a much clearer and more holistic picture of actual risk. This deep data reservoir can still be analysed using existing business intelligence tools and employee skills, thanks to the close integration between the Hadoop platform and the most widely used BI platforms, or, better still, through the new class of data discovery tools. The proposed approach, overcoming role hierarchies, leads to a unified view of the phenomena, providing the opportunity to catch in advance new trends or fraudulent patterns that would not easily be detected through the usual sectorial investigation.
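The “schema on read” idea can be sketched in a few lines: raw records land in the lake untouched, and a schema is applied only when the data is queried. A toy illustration follows, in which the record formats and field names are assumptions standing in for real claims and claims-notes feeds:

```python
import json

# Raw records land in the lake as-is: structured claim rows and free-form notes.
lake = [
    '{"type": "claim", "claim_id": "C1", "amount": 4200, "policy": "P9"}',
    '{"type": "note", "claim_id": "C1", "text": "witness reports prior contact with claimant"}',
    '{"type": "claim", "claim_id": "C2", "amount": 300, "policy": "P4"}',
]

def query_claims_with_notes(records):
    """Apply a schema at read time: join structured claims with any attached notes."""
    claims, notes = {}, {}
    for raw in records:
        rec = json.loads(raw)           # the schema is decided here, not at ingest
        if rec["type"] == "claim":
            claims[rec["claim_id"]] = rec
        else:
            notes.setdefault(rec["claim_id"], []).append(rec["text"])
    return [
        {**claim, "notes": notes.get(cid, [])}
        for cid, claim in claims.items()
    ]
```

Because no schema was imposed at ingest, a later query can combine the same raw records differently, say joining notes to policies instead of claims, without re-loading anything; that flexibility is what traditional schema-on-write warehouses lack.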

360° Customer view

Every company with lots of customers and many points of customer interaction aspires to build the proverbial 360-degree customer view. However, bringing together data from many separate administrative systems, claims systems and other data sources (usually a blend of commercial software products and home-grown apps) is not so easy, while ripping out, replacing or even simply touching these mission-critical systems of record is usually out of the question.

So how can a big insurance company access information from these diverse sources? NoSQL databases have emerged in recent years as a versatile and scalable option for bringing together data captured from dozens of legacy systems storing tens of terabytes, merging it into a single record that updates in near real time as new customer data is entered.

It is easy to understand the boost that a “single customer record”, representing a timeline of the customer's status, transactions, claims and inquiries, can bring in terms of process efficiency and, consequently, customer satisfaction. Moreover, the easy integration of semi-structured and unstructured information, such as images of health records, certificates, complaints, and administrative or file-based documents, with transactional data becomes more necessary day by day for improving the quality of service: another good argument for NoSQL.
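As a toy illustration of the single-record idea (the field names and the in-memory dictionary below are assumptions standing in for a real NoSQL document store), events arriving from separate legacy systems can be folded into one ordered customer document:

```python
from datetime import date

# Stand-in for a NoSQL document store: one document per customer.
store = {}

def upsert_event(customer_id, source, event):
    """Merge an event from a legacy system into the customer's single record."""
    doc = store.setdefault(customer_id, {"customer_id": customer_id, "timeline": []})
    doc["timeline"].append({"source": source, **event})
    doc["timeline"].sort(key=lambda e: e["date"])   # keep the timeline ordered
    return doc

# Events arriving out of order from different administrative systems:
upsert_event("CU-7", "claims",  {"date": date(2017, 3, 2), "kind": "claim", "amount": 900})
upsert_event("CU-7", "billing", {"date": date(2017, 1, 15), "kind": "payment", "amount": 120})
doc = upsert_event("CU-7", "crm", {"date": date(2017, 5, 9), "kind": "inquiry"})
```

The legacy systems of record are never touched; each simply feeds its events into the shared document, which is why this pattern sidesteps the rip-and-replace problem described above.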

Insurers need to understand their business quickly and interpret their data in order to foster innovation, reduce operational costs and provide a platform suited to accurate and fast risk assessment and innovative pricing, to compliance, and to leveraging new sales channels.

Data Reply supports customers in the design and implementation of data platforms that aim to enhance and capitalise on corporate information assets. Data Reply introduces the theme of personal data protection within the scope of the General Data Protection Regulation (GDPR), which will come into force in May 2018.