How AI is Changing the Future of Insurance
by Michael Giusti
From customer service to automated underwriting and even fraud detection, artificial intelligence promises to be both a ground-breaking tool and a disruptive force, with the double-edged potential to innovate and to shake up the insurance industry at once.
Artificial intelligence in the insurance industry already takes, and will continue to take, many forms. Large language models, much like ChatGPT, promise to play many roles, as will natural language understanding and machine learning. Generative AI, with its ability to create and analyze images, video, and audio, also holds huge potential for insurance industry use cases.
While artificial intelligence has been garnering major headlines across many industries recently, the insurance industry has been using AI tools in some form for years.
“From the carrier side we have been seeing predictive modeling since it started to really grow 10-15 years ago,” said Bill Holden, senior vice president of executive perils for The Liberty Company Insurance Brokers. “This is where they use vast amounts of data to make predictions.”
But even more familiar tools, such as the telematics devices used in usage-based insurance policies, rely on artificial intelligence underpinnings, as do the basic chatbots found on most insurance company websites.
Emerging Threats
Although AI tools have been familiar assets in the insurance landscape for years, some of the emerging artificial intelligence functions popping up in the market are actually creating new risks that might need to be insured themselves.
Take large language models, for one. AI companies train these models to answer questions autonomously, using predictive text that relies on the data they have been fed. Problems emerge because the AI isn’t designed to answer prompts with 100% accuracy; it is designed to do its best to predict which word would likely come next. And nobody is monitoring what it says in real time.
That has led to a phenomenon known as AI hallucination, where the program simply makes up some element of the response because it sounded like it would be correct.
If a hallucination were simply a nonsense response, there would be no harm. But there has already been one lawsuit filed in Australia after an AI accused a mayor, who had been a whistleblower in a case, of being the one who committed the offense.
That kind of published, damaging defamation falls into an area of law known as libel, and judgments based on those damages can easily run into the millions of dollars. Who has to pay in those cases comes down to who was at fault for the publication, which is where the interesting insurance question comes in.
Because the machine made the statement, was it the fault of the person who asked the question and thus prompted the publication in the first place? Was it the fault of the company that hosted the AI chatbot’s code on its servers? Was it the fault of the programmer? Which insurance company is going to have to pay to defend the lawsuit and, in the case of an adverse verdict, to cover the judgment?
A similar question of fault arises with the AI that drives autonomous vehicles. If an AI-driven autonomous drone, delivery vehicle, or taxi were to crash into another vehicle, or worse, kill a pedestrian, who is at fault? The owner? The programmer? The manufacturer?
These questions have not been fleshed out in the courts or the statehouses yet.
Generative AI has also sparked lawsuits based on copyright violations. With a generative AI, a company trains the program to create art or music by feeding it examples of existing works. But the original creators of those works have been taking issue with that and are often claiming that their work and styles are being stolen in the process. And they are suing.
Deep fakes present a similar liability question.
As these questions play out, policy language is going to have to be updated in personal lines, umbrella policies, business general liability policies, errors and omissions policies, directors and officers policies, media liability policies, and beyond.
“Liability is like a pebble in a pond,” Holden said. “It ripples out, and things you don’t think about come into play.”
Operational Effect
Beyond risks that need to be covered, artificial intelligence has and will continue to change many of the ways insurance companies operate — from the point of contact with the customer, all the way through to the way policies are underwritten and claims are processed.
The most immediate impact will be in paperwork. AI automation promises to minimize the risk of human error by streamlining services and automating tasks.
Everyday insurance functions, such as filling out forms, filing insurance certificates, checking policies, and all manner of clerical tasks, will be shifted to AI tools as soon as possible.
“I know they are already writing briefs,” Holden said. “I guess if they are not already, it will not be too far in the future that they will start to write coverage opinions.”
Historically, when customers applied for policies, insurance companies relied on customer-supplied data, some commercial databases, and limited human investigation to aid in underwriting.
With artificial intelligence, underwriters can use natural language understanding tools to read through things like reviews on sites like Yelp and thousands of public document filings and public records, and scrape social media feeds, all to build profiles on the applicants that can help assess risk.
The next step in artificial intelligence would be to remove the human underwriter entirely and to take that automatically collected data and create an automated coverage decision and rate nearly instantaneously. But that is going to have to be done cautiously to ensure unintended consequences don’t follow.
Through machine learning and modeling, insurance companies could automate many of the tasks that had previously been done through labor-intensive, hands-on processes.
After claims are filed, artificial intelligence can also step in and use generative AI to analyze images and video of damage and interface with sensors. AI can compare that damage and information with policy documents, returning coverage decisions and settlement offers in a fraction of the time a human would take.
Machine learning also holds the potential for detecting fraud by analyzing patterns that might slip past a human reviewer, flagging suspicious claims or behaviors that might signal something isn’t entirely above board.
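As a rough illustration of that pattern-flagging idea, here is a minimal sketch in Python. Everything in it is invented for illustration — the claim amounts, the simple z-score rule, and the three-standard-deviation threshold are a toy stand-in, not any insurer’s actual fraud model, which would weigh many more signals than a dollar amount.

```python
from statistics import mean, stdev

def flag_suspicious_claims(amounts, threshold=3.0):
    """Return the indexes of claim amounts more than `threshold`
    standard deviations from the mean -- a toy stand-in for the far
    richer pattern analysis a real fraud model would perform."""
    if len(amounts) < 2:
        return []
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mu) / sigma > threshold]

# Twenty routine claims and one outlier: only the outlier is flagged.
claims = [1000.0] * 20 + [50000.0]
print(flag_suspicious_claims(claims))  # [20]
```

The principle in a production system would be the same, even with far more inputs (claim history, timing, provider networks): surface the cases a human reviewer should examine rather than deciding guilt on its own.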
Many of these tools are here now, and many are being rolled out in stages. Many more are sure to come.
Job Advancement & Downfalls
With all that automation, a lot of people in the industry will be looking over their shoulders to see whether their jobs are the ones at risk.
In the early stages of the AI revolution, the most tedious and repetitive jobs will be the most likely to be lost, as will many of the front-line, customer-facing roles that had previously been outsourced to call centers.
With large language models able to return answers in elegant responses, chatbots will continue to play an increasingly important and expanding role. And with generative AI able to understand and create human-sounding voice responses, many phone-based customer service jobs that haven’t already been automated will likely be next.
But that doesn’t mean every insurance job is immediately at risk.
Anyone who has dealt with an automated call center knows the frustration of asking for a human attendant because the AI just isn’t cutting it.
And though a drone may be able to capture post-disaster damage, and a phone’s camera can relay video and photos to the insurance company’s AI to assess the damage after a car crash, Holden said there is still something missing when there aren’t people involved in the process — at least for now.
“Until it can emulate emotion and empathy, AI can’t do the claims adjusting on its own,” Holden said. “It still needs to learn its bedside manner.”
Bias and Discrimination
There is a ghost in the machine when it comes to automating insurance roles that were previously done by humans — bias and discrimination. There are strict laws governing discrimination in insurance, but with many of the AI tools, how they make their decisions is shrouded inside a black box.
Many researchers have pointed to systemically racist results that have been produced by AI in many different contexts.
Bob Gaydos, CEO of Pendella Technologies, said that while AIs are able to process much faster than humans, that speed is often ultimately their liability.
“You have to protect it from its speed. Speed is a great thing, but a dangerous thing, and AI makes assumptions at a crazy speed,” Gaydos said.
He said that if an AI holds an assumption that hints it toward a biased coverage decision, the nature of AI means it is going to compound that hint and double down, turning the hint into an overblown bias.
A human might rely on wisdom or experience to realize that biased decision was wrong, ill informed, or even illegal, but steps are going to have to be actively taken to ensure the AI doesn’t discriminate.
Underwriters are going to have to be especially conscious of the implications that automated underwriting can have when it comes to bias and discrimination. If they don’t, they will invite an avalanche of political oversight and regulation.
Already, Colorado is proposing a regulation to prevent AI-driven discrimination in insurance.
“The political door is open with Colorado. State regulators are going to say, ‘If you are using AI, you are going to have to show us how you are going to use it,’” Gaydos said. “But that will open Pandora’s box.”
The Future of AI & Insurance Industries
AI is already good at automating the repetitive and predictable tasks in the insurance industry, but for now, the human touch is still needed. As AI improves, more complex tasks will continue to be handed off, perhaps opening more opportunities for oversight roles and efficiencies.
From a customer standpoint, the AI future is a dream of an automated, frictionless experience: getting into a car that tells you real-time insurance rates for different routes to work that morning, based on traffic and road conditions; then, if there is an accident, having the claim processed automatically with the tap of an app while the car drives itself to the shop and a replacement car finds its way to the driveway, without the driver having to take a moment off work.
That world of more usage-based and real-time pricing is almost a certainty as data and feedback models become more available and help drive decisions in real time.
Risk management and mitigation will play as large a role in the customer service realm as underwriting does now, with aerial images coupled with generative AI analysis giving agents the information they need to help customers head off problems like roof leaks before they happen.
But for now, things are changing fast, and AI’s potential seems to be everywhere.
Michael Giusti, MBA, is senior writer and analyst for InsuranceQuotes.com.