Bias in Insurance Underwriting: Does AI Help or Hurt?
In insurance, pricing varies with risk, and risk depends heavily on individual circumstances. Different people face different degrees of it, and "no two people are entirely alike," said Daniel Schreiber at Insurance Thought Leadership. Individuality is part of the picture. The problem is that when decisions are individualized, bias is all but certain to play a role in the process.
Ideally, we would all live in a world where discrimination doesn't exist. In reality, it does, and its systemic presence touches just about every sphere of life. Against this backdrop, artificial intelligence offers a potential remedy: a uniform formula that could eliminate bias in underwriting and deliver "truly individualized" risk assessments, Schreiber said.
That, at least, is the promise.
“Concerns regarding racial or gender bias in AI have arisen in applications as varied as hiring, policing, judicial sentencing, and financial services,” said John Villasenor at Brookings. There are three important challenges:
- AI is only as impartial as the data it draws from. Marginalized groups may be deemed less creditworthy, for example, because "evaluations of creditworthiness are determined by factors including employment history and prior access to credit—two areas in which race has a major impact." (A brief proxy-audit sketch follows this list.)
- AI can amplify bias as the algorithms evolve. We use the word intelligence for a reason: machine learning is a dynamic process, and a feedback loop can take a molehill of bias in the training data and make a mountain of it.
- AI isn’t socially savvy. Sometimes it’s appropriate for insurers to make demographic-based decisions, pricing people differently according to age or gender. In other cases, it’s against the law. Intelligent machines may not always get it right.
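To make the first of these challenges concrete, here is a minimal, hypothetical sketch of a proxy audit on synthetic data (pandas assumed; the figures and feature names are invented, not drawn from the sources above). It checks whether an ostensibly neutral underwriting input, such as prior access to credit, tracks a protected group closely enough that a model trained on it could reproduce the underlying bias.

```python
# Hypothetical proxy-variable audit on synthetic applicant data: does an
# ostensibly neutral feature track a protected attribute closely enough to
# act as a stand-in for it?
import pandas as pd

# Made-up records (illustrative only).
df = pd.DataFrame({
    "group":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prior_credit":   [1,   1,   0,   1,   0,   0,   1,   0],  # had prior access to credit
    "years_employed": [6,   4,   7,   5,   1,   2,   3,   1],
})

# Compare feature averages across groups; large gaps mean the feature
# carries information about group membership.
print(df.groupby("group")[["prior_credit", "years_employed"]].mean())

# A single-number check: correlation between group membership and the feature.
df["is_group_b"] = (df["group"] == "B").astype(int)
print("correlation:", round(df["is_group_b"].corr(df["prior_credit"]), 2))
```

A gap like the one this toy data shows is exactly how "neutral" inputs end up encoding race or other protected traits.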
“It is not news that, for all its promised benefits, artificial intelligence has a bias problem,” said Villasenor.
Case in point: according to Alison Jimenez at American Banker, about 14 percent of Hispanic households in the U.S. are "unbanked," along with 17 percent of African-American households. "In some cases, these adults do not have a current, government-issued photo ID," Jimenez said. "As a result, bank AML [anti-money-laundering] algorithms may flag frequent but innocuous prepaid card transactions for Hispanic customers and banks could file disproportionate reports with FinCEN on 'suspicious' IDs for African-Americans."
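As an illustration of how a disparity like that might be surfaced, the sketch below computes alert rates by group on made-up transaction data (pandas assumed); the figures are invented for the example and do not come from Jimenez's article.

```python
# Hypothetical check of AML alert rates by customer group, on synthetic data.
# A large gap in flag rates is the kind of disparity described above.
import pandas as pd

# Made-up alert log: 1 = transaction flagged as "suspicious".
alerts = pd.DataFrame({
    "group":   ["Hispanic", "Hispanic", "Hispanic", "Other", "Other", "Other", "Other", "Other"],
    "flagged": [1,          1,          0,          0,       0,       1,       0,       0],
})

# Flag rate per group.
rates = alerts.groupby("group")["flagged"].mean()
print(rates)

# Ratio of the highest to the lowest flag rate; a ratio well above 1
# suggests one group is being flagged disproportionately.
print("flag-rate ratio:", rates.max() / rates.min())
```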
These are not acceptable mistakes. So how do we tackle AI bias? Bernard Marr at Forbes outlined three tactics:
- One answer is to look at how the algorithms are coded and address anything that may perpetuate bias. One important caveat: make sure the team in charge of the review is itself diverse.
- Another is to design the algorithms to detect biases proactively (a minimal sketch of such a check follows this list).
- Finally, and most fundamentally, we need to review how we as a society make decisions, uprooting our own systemic biases first. “We can’t expect an AI algorithm that has been trained on data that comes from society to be better than society – unless we’ve explicitly designed it to be,” said Dr. Rumman Chowdhury of Accenture, according to Marr.
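As one illustration of the second tactic, detecting bias proactively, the sketch below computes a simple demographic-parity gap on simulated underwriting decisions and raises an alert when the gap exceeds a threshold. The data, function name, and threshold are illustrative assumptions, not part of any of the cited sources.

```python
# Minimal sketch of a proactive bias check: compare approval rates a
# (hypothetical) underwriting model produces for two groups and alert when
# the gap exceeds a threshold. Pure Python; the model output is simulated.
def demographic_parity_gap(predictions, groups, positive=1):
    """Absolute difference in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in outcomes if p == positive) / len(outcomes)
    vals = list(rates.values())
    return abs(vals[0] - vals[1]), rates

# Simulated model decisions (1 = approve) for applicants in groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print("approval rates:", rates)
if gap > 0.2:  # the threshold is a policy choice, not a universal standard
    print(f"bias alert: demographic parity gap = {gap:.2f}")
```

A check like this runs on a model's outputs rather than its internals, which is why it can sit alongside any rating engine regardless of how the underlying algorithm works.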
As Jake Silberg and James Manyika from McKinsey & Company pointed out, “AI can help reduce bias, but it can also bake in and scale bias.” To make sure that intelligent machines are helping society, not further perpetuating its problems, we’re going to need a lot of intentionality at the root.
As you fine-tune your underwriting business rules, a nimble policy administration system can make all the difference. One reason customers love Silvervine 6.0 is its "what-if" scenario and rating-model simulation. Discover more reasons to love Silvervine 6.0 here.