When Software Causes Accidents: Auto Insurance Implications


Is innovation outpacing insurance? According to the NY Times, the answer is yes. “Advances in self-driving car technology have gotten ahead of insurers’ ability to factor the systems into auto premiums.” Translation: self-driving cars are still being insured like traditional vehicles, where one driver’s insurer pays for the other party’s injuries or damages. The problem is that, in this case, there aren’t just two parties: this driver, that driver. There’s an invisible third party in the car as well: the software.

When it’s the software’s fault

This May, a Tesla running on Autopilot drove under a tractor-trailer in Florida, killing the man behind the wheel. Tesla described the incident as a system failure, and according to The Guardian, the company is still trying to understand what caused it.

So far, incidents like this have been rare – much rarer, in fact, than fatalities in conventional cars. “Among all vehicles in the US, there is a fatality every 94 million miles,” Tesla said. “Worldwide, there is a fatality approximately every 60 million miles.” By contrast, Tesla had logged 130 million miles on Autopilot before its first fatality.
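To make that comparison concrete, here is a minimal, purely illustrative sketch that converts the figures quoted above into a common rate. The numbers are the ones Tesla cited; a single Autopilot fatality is far too small a sample for real actuarial conclusions.

```python
# Back-of-the-envelope comparison of the fatality rates quoted above.
# Figures are as cited by Tesla; illustrative only, not an actuarial model.

US_MILES_PER_FATALITY = 94e6                # all vehicles, US
WORLD_MILES_PER_FATALITY = 60e6             # all vehicles, worldwide
AUTOPILOT_MILES_AT_FIRST_FATALITY = 130e6   # Autopilot miles before first fatality

def fatalities_per_100m_miles(miles_per_fatality: float) -> float:
    """Convert 'one fatality every N miles' into a rate per 100 million miles."""
    return 1e8 / miles_per_fatality

for label, miles in [
    ("US, all vehicles", US_MILES_PER_FATALITY),
    ("Worldwide, all vehicles", WORLD_MILES_PER_FATALITY),
    ("Tesla Autopilot (one fatality)", AUTOPILOT_MILES_AT_FIRST_FATALITY),
]:
    print(f"{label}: {fatalities_per_100m_miles(miles):.2f} fatalities per 100M miles")
```

Run as written, this shows roughly 1.06 fatalities per 100 million miles for US vehicles, 1.67 worldwide, and 0.77 for Autopilot so far – the gap Tesla’s statement points to.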

Still, consider the question it raises: if car A collides with car B because of the software in car A, who pays? Under the conventional system, insurer A would be responsible for paying damages to party B. If insurer A wants to recoup that expense, it would have to file a separate claim against the manufacturer.

Pretty clunky.

Also, it’s not that simple. Even if the software has clearly malfunctioned, is the driver really off the hook? According to Tesla spokeswoman Khobi Brooklyn, no. A Tesla running on Autopilot is not an autonomous vehicle – even if, to the driver, it feels like one – and it “does not allow the driver to abdicate responsibility.”

Are we expecting too much?

When Google tested its self-driving prototype on its employees several years back, it found that the last thing people were inclined to do in such a car was to fold their hands and look straight ahead. “Within about five minutes, everybody thought the car worked well, and after that, they just trusted it to work,” said Chris Urmson, the program head. “It got to the point where people were doing ridiculous things in the car.”

Such behavior is considered a misuse of the technology. But let’s be realistic: can humans be expected to act like they’re driving when they’re not? Even though, as Tesla has said, its owner’s manual is clear about the limitations of the Autopilot feature, expecting people to abide by what they know intellectually when it contradicts the sense of safety they feel in their skin may be overly optimistic.

These are sticky, open-ended questions. The technology is young and will continue to evolve, not only to provide a better user experience but also to reduce risks like these. Where it lands – what the risks prove to be five, ten, or twenty years out – is unclear.

In the meantime, insurers must engage the same questions, working in parallel to address the issues we’re looking at now as well as those we’ll be looking at several years from now. What are the risks? Who is responsible? Does the answer account for all the factors?

With autonomous vehicles on the rise, sooner or later, these questions will land on your doorstep in the form of a claim. When they do, will you be prepared? How about your insurance software?

If you’re in need of a nimble, agile platform that easily adapts to the changing landscape of insurance, look no further than Silvervine. Download our Losing Your Legacy report here.