The appeal of bringing in AI as a quick win on existing systems can bring longer-term issues
The insurance industry is driven by data, from underwriting to claims to pricing, as well as customer interactions, marketing and products. The exponential growth in the volume of structured and unstructured data available to insurers has meant that the historically slow-to-adapt sector has needed to make faster, more informed decisions and to operate more efficiently. The challenge so far has been how to harness the potential of all of that data to create a competitive, data-driven advantage and serve customers better.
Most insurers understand the transformative potential of AI and other big-data technologies, which are now central to financial services. However, few have real clarity on how to ensure their ethical use, or on who is responsible should things go wrong.
The Chartered Insurance Institute’s newly released insurance, tech and data report examines why and how regulatory requirements such as GDPR, together with developments across finance, technology and data, impact the insurance sector, and how this is viewed by consumers and SMEs. The insurance sector has longstanding “trust issues”, and the industry has in the past raced towards using tech without first defining how to measure fairness and trust from a consumer perspective.
Without a coherent strategy of accountability for the governance of data and AI, the insurance sector puts itself and others at financial and reputational risk. Amid a highly challenging consumer landscape and an ongoing cost-of-living crisis, insurers simply cannot afford to lose customer trust.
The industry needs to be aware that, despite reams of positive news stories around AI and automation, the hype around customer-facing functions such as first notification of loss (FNOL) and fraud management may not live up to consumer reality. In a tech-enabled, value-led society, if a consumer has a negative experience, trust is easily lost.
For example, in May 2021, AI-powered insurance provider Lemonade was forced to issue an apology after a series of its social media posts described how it used AI to reject claims. Its AI used non-verbal cues (such as eye movements) to decide whether claims were fraudulent. Customers were not made aware that their biometric data was being collected and used to determine the outcome of their claims, nor were they told about the decision-making process behind this. The resulting backlash on social media led Lemonade to delete the posts.
This raises questions about how to ensure AI is used ethically and in ways that enhance rather than diminish trust and customer confidence. It is critical that insurers put in place a framework for the ethical use of data and AI to assure customers that the data and algorithms used to make critical decisions are not biased or otherwise untrustworthy.
Lindsay Lucas, CEO of data and software provider Software Solved, said: “If historical workflows and processes are not addressed, then the issues being discussed around AI will be compounded. As we have seen in early stories about learnt bias in AI (the Amazon CV-screening tool being one that jumps to mind), whilst it may be tempting to use the wealth of historical data some of these organisations hold, unless the inherited bias is dealt with first, the AI will unfairly select and determine decisions. Often the appeal of bringing in AI as a quick win on existing systems can bring longer-term issues, and therefore more savvy insurers should be looking internally at their systems, processes and data quality ahead of layering in any AI.
“You have to be confident in the quality and accuracy of the data held in order to trust the decisions then being made on that data. Fundamentally, the technology can be incredibly useful when used on accurate, current and unbiased data sets, but the work needed to make systems more robust, deal with technical debt that can lead to poor data processing, and clean up bad data (or be brave enough to ditch it) is often unpopular because of the lack of immediate ROI, or simply out of fear. These actions are incredibly important to building tech you can trust, and yet they are the behind-the-curtain activities that only the more forward-thinking companies undertake, because they know that failure to address them will result in poor decision-making data in future.
“It is important that AI is fully understood and that organisations do not buy in ‘black box’ style technology that doesn’t align with their own decision-making processes. Ultimately, the AI should be there to augment a team and provide some of the processing or information that would traditionally have been delivered by a human being, but it shouldn’t completely remove the human element. Organisations should still be able to decode an AI decision and validate it for themselves, so that when anomalies happen they can be dealt with in a human way that builds trust – no one wants to be back in the ’90s hearing “computer says no” with no other option provided.
“The human element is the biggest barrier to the expansion of AI. Where AI is being used to enhance information or make life easier for people, they will embrace it, value it and engage with it. However, if people feel watched, monitored and scrutinised because of the use of AI, they will look for ways to outsmart the systems, which in turn will lead to bad data, poor AI decision-making due to incorrectly learnt biases and, ultimately, disengaged people. I agree that regulation of the use of AI is the natural way to go here if it is going to be embraced as a change for good across the industry. Many companies are already using AI for good, but those using it more intrusively, and failing to be transparent about it, just further substantiate the need for regulation in order to build trust in these systems, which can be transformational for organisations when used in the right way.”