From Art to Science: New Uses for Insurance Data
According to Best’s Review, a recent “explosion” of analytics advancements is “shifting underwriting from an art to a science.” In terms of risk, we’re talking better visibility and better predictions. But there is a little thunder behind this silver lining: over-reliance on digital tools presents a problem, and privacy concerns persist.
With that in mind, let’s take a look at some new ways in which insurance data is being put to work – as well as some pitfalls to be aware of along the way.
New data brings old risks out of the shadows.
It’s one thing to know that cars hit a lot of deer in Pennsylvania. It’s quite another to be able to analyze and respond to that risk in a meaningful way.
Best’s said that Kassie Bryan, the head of P&C Solutions for Swiss Re, recently “incorporated deer population density data into [the company’s] Motor Market Analyzer.” With predictive modeling based on granular data, Swiss Re has refined its regional understanding of deer collision risk, in both frequency and severity.
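To make the idea concrete, here is a minimal sketch of how a frequency-and-severity estimate might fold in a regional deer-density figure. The log-linear form, the coefficients, and the severity assumption are all illustrative stand-ins, not Swiss Re’s actual Motor Market Analyzer methodology.

```python
import numpy as np

# Hypothetical log-linear frequency model for deer-collision claims.
# The coefficients and severity figure below are illustrative only;
# they are not Swiss Re's actual Motor Market Analyzer parameters.
BASE_LOG_FREQ = -4.0    # baseline log of claims per car-year
DENSITY_COEF = 0.02     # added log-frequency per deer per square mile
AVG_SEVERITY = 4_500.0  # assumed average cost per deer-collision claim (USD)


def expected_annual_loss(deer_density: float, exposure_car_years: float) -> float:
    """Expected deer-collision losses for a territory: frequency x severity."""
    freq_per_car_year = np.exp(BASE_LOG_FREQ + DENSITY_COEF * deer_density)
    expected_claims = freq_per_car_year * exposure_car_years
    return expected_claims * AVG_SEVERITY


# Compare two hypothetical Pennsylvania territories that differ only in deer density.
for label, density in [("rural county", 35.0), ("suburban county", 12.0)]:
    loss = expected_annual_loss(density, exposure_car_years=10_000)
    print(f"{label}: expected annual deer-collision losses of about ${loss:,.0f}")
```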
Nontraditional data sources like this one have the potential to revolutionize underwriting nationwide, offering new vantage points that bring long-standing questions into the sunlight.
New tools are changing the process of risk prediction.
Underwriting has a human touch. According to McKinsey’s Ari Chester, quoted in Best’s, the conventional model is founded on the expertise of underwriters who have built their experience and judgment slowly over a period of years.
There is value in that. As analytics continue to proliferate, however, some of the experience-based “art” behind an underwriter’s decision-making will give way to automated or digitally augmented processes.
Ideally, this would result in a collaboration between human and machine, with art and science pulling together. However it shakes out in the coming decades, we can be sure that the processes currently driving risk prediction will transform a great deal as digital information finds new uses.
Evaluating data is good; blindly trusting it, not so much.
Many routine functions can be safely entrusted to an algorithm to sort out. Some, however, require a human mind. The goal, then, is to minimize unnecessary human inputs – to let the machine do what it does best – while making sure that when a question does deserve an expert’s thinking, it gets it.
As Chester pointed out, “the answer provided by the algorithm is not necessarily accurate.” It’s important to bear in mind that at this point, the purpose of analytics is to support decision-making, not replace it.
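As a concrete illustration of that division of labor, here is a minimal sketch of a triage rule that lets an algorithm clear routine submissions while referring low-confidence or high-stakes cases to an underwriter. The fields, thresholds, and dollar limit are hypothetical assumptions, not anyone’s production workflow.

```python
from dataclasses import dataclass

# Hypothetical triage rule for new-business submissions. The thresholds and
# fields are illustrative assumptions, not an industry standard or vendor API.
CONFIDENCE_THRESHOLD = 0.90   # below this, the algorithm's answer is not trusted alone
REFERRAL_LIMIT = 250_000      # total insured value above which a person always reviews


@dataclass
class Submission:
    applicant_id: str
    total_insured_value: float
    model_score: float       # algorithm's recommended decision score, 0 to 1
    model_confidence: float  # algorithm's self-reported confidence, 0 to 1


def route(sub: Submission) -> str:
    """Return 'auto-approve', 'auto-decline', or 'refer to underwriter'."""
    if sub.total_insured_value > REFERRAL_LIMIT:
        return "refer to underwriter"   # high stakes: expert judgment required
    if sub.model_confidence < CONFIDENCE_THRESHOLD:
        return "refer to underwriter"   # low confidence: don't trust the answer blindly
    return "auto-approve" if sub.model_score >= 0.5 else "auto-decline"


# A routine, high-confidence submission is handled automatically;
# an ambiguous one lands on an underwriter's desk.
print(route(Submission("A-1001", 80_000, model_score=0.82, model_confidence=0.97)))
print(route(Submission("A-1002", 80_000, model_score=0.55, model_confidence=0.61)))
```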
Privacy concerns have yet to be settled.
Insurance is a data-heavy industry, and always has been. Yet now that the volume and granularity of available data have grown so dramatically, our industry has some pressing questions to tackle about consumer privacy.
Moreover, there are few firm guideposts. That’s because regulatory oversight is a patchwork, Best’s said, and consumer opinion is a moving target.
Generally speaking, the public doesn’t like the idea of insurers extracting personal data from their digital doings: their browser history, personal email, use of social media, and so on. That said, many policyholders are amenable to exchanging certain data for premium discounts.
Given the still-evolving regulatory landscape, and given how much consumer trust contributes to brand equity (read: long-term profitability), it makes sense to grapple with these ethical questions sooner rather than later, and to exercise caution in how we navigate – and protect – policyholder privacy.
Looking for a policy administration system that lets you store new types of data, and build new business rules? Silvervine 6.0 delivers. Request a demo to learn more.