What’s Next for AI and Insurance
Sibylle Fischer February 24, 2023
AI-enabled tools and tech are already reshaping the insurance landscape – creating opportunities and challenges for incumbents.

ChatGPT, a product of San Francisco-based OpenAI, reached 100 million active users in just two months, making it the fastest-growing consumer application in history at the time – and providing a useful demonstration of the disruptive possibilities of conversational AI technology.

For insurers, this is hardly the first time the industry has had to consider the role that artificial intelligence (AI) and related technologies such as big data, machine learning and deep learning will play in both its present and its future.

But as AI finds more and more points of integration within the industry, its transformation of our business seems all but certain – touching everything from claims and underwriting to customer service and the regulatory framework. What should insurers be watching out for?

Conversational or generative AI

ChatGPT, which is built on a machine learning system known as a large language model, produces natural-sounding language after training on enormous data sets of text. Because it models words in context, it can convincingly mimic human speech patterns while drawing on the broad, encyclopedic knowledge encoded in its training data.

As one example of its capabilities, I asked ChatGPT: “What is the impact of AI on insurance?” and received the following answer almost instantly:

“Artificial intelligence (AI) is having a significant impact on the insurance industry. One of the main ways it is being used is for risk assessment and underwriting. AI-powered systems can quickly analyze large amounts of data and identify patterns that human underwriters might miss, which can help insurers make more accurate risk assessments and offer more personalized policies. Additionally, AI can be used to automate repetitive tasks such as claims processing, which can make the claims process faster and more efficient. Furthermore, AI can also be used to improve customer service by providing instant responses to customer inquiries. Overall, AI can help insurers to be more efficient, make more accurate risk assessments and provide better customer service.”

As ChatGPT correctly points out, generative AI is sure to play an important role in customer service – improving the user experience of chatbots, streamlining onboarding workflows, and delivering instant information about policies and claims in a customer’s preferred language. It can also act in a supporting role to underwriters who need to analyze troves of unstructured data, for example, finding patterns that could indicate fraudulent activity as part of processing claims and policy applications.
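
To make that supporting role concrete, here is a minimal sketch of what such a tool could look like. It assumes access to OpenAI’s hosted chat-completions API through the official Python client; the model name, prompts and workflow are illustrative assumptions rather than a recommended implementation, and the output is only a draft for a human claims handler to review.

# Illustrative sketch only: a generative-AI assistant for claims handlers.
# Assumes the official `openai` Python package (v1+) and an API key in the
# OPENAI_API_KEY environment variable; the model name and prompts are
# hypothetical choices, not a recommendation.
from openai import OpenAI

client = OpenAI()

def summarize_claim(claim_text: str) -> str:
    """Summarize a free-text claim and flag points a human handler should verify."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        temperature=0,          # keep the output as repeatable as possible
        messages=[
            {"role": "system", "content": (
                "You support insurance claims handlers. Summarize the claim in "
                "three bullet points and list any inconsistencies a human reviewer "
                "should verify. Do not make coverage decisions.")},
            {"role": "user", "content": claim_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_claim(
    "Water damage in the kitchen on 3 March; dishwasher hose burst overnight; "
    "repair estimate attached; a similar claim was filed last year."
))

Even in a sketch like this, the key design choice is that the model drafts and flags while the decision stays with a person – which is exactly where its current limitations matter.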

But while its speed is impressive and its expertise broad, ChatGPT still has several well-documented limitations, including inaccuracies, plagiarism and bias. As you can see from the example above, ChatGPT performs admirably with generalities, but many have observed the chatbot struggle where more nuanced, specialized subject knowledge is necessary. And in a highly regulated industry, there is no room for errors such as giving incorrect advice or misinterpreting the terms and conditions of a product.

Because of this, human oversight and regulation of this powerful tool will likely be essential for most business, research or scientific applications. Ethical concerns about market concentration, familiar from the search space, will also be relevant for proprietary conversational AI products like ChatGPT – developing open-source AI technologies may become a priority to address the risk of creating potential monopolies.

AI training data & regulatory frameworks

AI models are trained on data sets from which they learn the patterns they later use to make predictions. The higher the quality of the training data, the more accurately the model performs its function. Without a strong focus on how systems are trained, AI can fail or be deeply flawed.

For example, if the data used to train an AI-driven insurance product is biased, the model will amplify that bias, potentially affecting millions of customers and exposing insurers to serious legal consequences. Similarly, chatbots cannot be trained on raw language data alone – supervised learning and human trainers are necessary to improve performance and to help correct for biases.
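
As a rough illustration of the kind of check this implies, the sketch below inspects a hypothetical historical claims data set for outcome gaps between groups before it is used for training. The column names and the threshold are invented for the example; a real fairness review would involve proper statistical testing and domain expertise.

# Illustrative sketch only: a quick pre-training check for outcome disparities.
# The columns ("region", "claim_approved") and the 10% threshold are invented
# for this example and are not a substitute for a full fairness review.
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group in the historical data a model would learn from."""
    return df.groupby(group_col)[outcome_col].mean()

def disparity_flag(rates: pd.Series, max_gap: float = 0.10) -> bool:
    """Flag the data set for review if the gap between groups exceeds max_gap."""
    return (rates.max() - rates.min()) > max_gap

# Hypothetical historical claims data.
historical_claims = pd.DataFrame({
    "region":         ["A", "A", "B", "B", "B", "A"],
    "claim_approved": [1,    1,   0,   0,   1,   1],
})

rates = approval_rate_by_group(historical_claims, "region", "claim_approved")
print(rates)
if disparity_flag(rates):
    print("Approval-rate gap above threshold: review the data before training.")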


Regulatory frameworks, therefore, must play a central role in the future of AI, especially as the technology is embedded into an increasing number of insurance products, services and processes. In addition to matters of bias, privacy concerns over the use of personal data are likely to be another focus of regulations aimed at ensuring the development of AI tools consumers can trust.

Writing the next chapter for insurance & AI

As ChatGPT has demonstrated, the rapid rise of AI-enabled tools, products and processes in insurance is happening all around us. Ultimately, incumbents would be wise to engage with these issues early and to experiment in safe, contained environments with limited risk in order to gain expertise and upskill internal teams, especially around data privacy, bias and regulatory frameworks. Doing so will help maximize the opportunities and mitigate the challenges AI holds for our industry.
