We asked 20 insurance leaders how they have built trust in AI for claims processing among staff and customers. A few themes emerged in their answers: addressing resistance, ensuring transparency and accuracy, ethical considerations, and aligning AI with human expertise. The interviewees were from general insurance, Lloyd’s and the London Market, plus claims adjusting services such as TPAs.
Key takeaways: Building trust in AI
- Ensure AI systems are explainable, auditable, and openly communicated.
- Engage diverse teams across claims, IT, and compliance for a unified approach.
- Use high-quality, representative data and ethical guidelines to minimise errors and bias.
- Train employees to work with AI, highlighting its role as an enabler, not a replacement.
- Maintain strict compliance with privacy and data protection regulations.
Here’s a closer look at what was said.
1. Human resistance and mindset shifts
“There’s a mindset among claim handlers that AI might put them out of a job, but it actually opens up a whole world of opportunity.”
- Martin Turner, Chief Claims Officer, AXA XL
“Building out user stories, looking at the process end to end and mapping that out really helped us refine the AI’s performance.”
- Senior claims leader
One of the biggest challenges in adopting AI for claims processing is overcoming resistance from claims teams. Many claims handlers are worried that AI could threaten their jobs or undermine their expertise. These fears stem from a broader uncertainty about how AI will integrate into the industry and whether it will replace human judgment entirely.
To address this, engage claims teams early. Organisations that successfully implement AI often start by involving diverse groups of employees in discussions about purpose and potential. Clear explanations of how AI works and demonstrating its ability to streamline routine tasks while preserving, or enhancing, human oversight can help reduce fears.
Training programmes tailored to claims handlers can help with this transition. When employees understand how AI supports, rather than replaces, their roles, they are more likely to see it as a valuable tool. Encouraging team members who are naturally enthusiastic about technology to act as early adopters can also build momentum. By sharing examples of how AI improves efficiency and accuracy, these advocates help build trust among colleagues.
Ultimately, the goal is to shift the perception of AI from a threat to an opportunity. Claims handlers who see AI as a means to reduce repetitive tasks, allowing them to focus on complex or high-value cases, are more likely to embrace it. Open dialogue, transparency, and practical demonstrations of AI’s benefits are key to achieving this shift.
Read more: How automated claims processing enhances customer experience
2. Transparency and traceability
“AI systems appear untrustworthy when they operate as a ‘black box,’ where decisions are made without visibility or oversight. Insurers need to ensure every decision can be traced, audited, and assessed for compliance with ethical and operational standards. Without this, it becomes impossible to ensure the AI is working as intended or upholding organisational values.”
- Claims Director, Personal Lines
A lack of transparency in AI systems is another barrier to trust in claims processing. Many AI models are “black boxes,” producing decisions without clear explanations of how outcomes are reached. For claims teams, this opacity can raise concerns about fairness, accountability, and the potential for errors.
To overcome this, insurers should prioritise explainability. AI models should provide clear, comprehensible insights into their decision-making processes. This not only reassures claims handlers but also allows them to audit and, where necessary, override decisions to ensure fairness and accuracy. Tools enabling this level of visibility are essential for fostering trust.
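One lightweight way to give claims handlers that visibility is "reason codes": surfacing each feature's contribution to a score so a handler can see why a claim was flagged. The sketch below is purely illustrative; the feature names and weights are hypothetical, not from any real model.

```python
# Illustrative "reason codes" sketch for a simple additive scoring model.
# All feature names and weights are hypothetical.
WEIGHTS = {"claim_amount_z": 0.8, "days_to_report": 0.5, "prior_claims": 1.2}

def score_with_reasons(features: dict):
    """Return a score plus the features ranked by their contribution."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    score = sum(contributions.values())
    # Rank features by how much each pushed the score up
    reasons = sorted(contributions, key=contributions.get, reverse=True)
    return score, reasons

score, reasons = score_with_reasons(
    {"claim_amount_z": 2.0, "days_to_report": 1.0, "prior_claims": 0.0})
print(reasons[0])  # the feature that contributed most to the score
```

In practice, explainability tooling for non-linear models (e.g. SHAP-style attributions) serves the same purpose: giving the handler a ranked, human-readable account of what drove a decision, so they can audit or override it.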
Another key aspect of transparency is clear communication with stakeholders about how AI is being used. Policies that outline the role of AI in claims processing, and sharing examples of its benefits, can demystify the technology. Explaining how AI reduces bias and promotes consistent decision-making reinforces its value to both employees and customers.

Traceability is equally important. Claims teams need confidence that decisions made by AI can be reviewed and verified. Insurers should be able to audit AI outcomes, ensuring every decision is backed by data and can withstand scrutiny. This enables teams to meet internal governance requirements and build confidence in AI’s reliability.
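A minimal sketch of what "every decision is backed by data and can withstand scrutiny" might look like in practice is a decision audit record: each AI output is stored alongside the exact inputs and model version that produced it, with a tamper-evident hash. All names and fields here are illustrative assumptions, not a real insurer's schema.

```python
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative audit record: every AI decision is stored with the inputs
# and model version that produced it, so it can be reviewed later.
@dataclass
class ClaimDecisionRecord:
    claim_id: str
    model_version: str
    inputs: dict          # the features the model actually saw
    decision: str         # e.g. "approve", "refer_to_handler"
    confidence: float
    timestamp: str

    def fingerprint(self) -> str:
        """Tamper-evident hash of the full record for audit purposes."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = ClaimDecisionRecord(
    claim_id="CLM-1042",
    model_version="triage-v2.3",
    inputs={"claim_amount": 1800, "policy_age_days": 412},
    decision="refer_to_handler",
    confidence=0.62,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(record.fingerprint())
```

Because the record captures the model version, a reviewer can replay the same inputs against that version and verify the outcome, which is the practical basis for internal governance and regulatory audits.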
3. Accuracy, bias, and ethical AI use
“When there are nuances around race, gender, ethnicity, and different cultural values or norms, and how these may impact a claim, this should be considered in terms of how that is feeding into the [AI] model in order to give out an accurate and reliable result.”
- Senior Claims Operations Lead
The reliability of AI in claims processing hinges on accuracy and fairness. Poor-quality or incomplete data can lead to errors, such as rejecting legitimate claims or approving fraudulent ones. Bias in training data can make inequalities worse, undermining trust in AI and the organisations using it.
To ensure accuracy, AI models need to be trained on diverse, high-quality datasets that reflect the wide range of scenarios encountered in claims processing. Regular monitoring and updating of models helps maintain performance as market conditions evolve and new regulations emerge. This reduces the risk of errors and ensures AI consistently delivers fair and accurate results.
Read more: Generative AI in the insurance industry
Insurers should assess how factors such as race, gender, or cultural norms might influence the data used to train AI models. Without careful oversight, these biases can skew outcomes, creating unfair disadvantages for certain groups. Establishing strict data governance practices and conducting regular audits will mitigate this risk.
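One common form such an audit takes is a demographic parity check: comparing approval rates across groups and flagging gaps above a tolerance. The sketch below is a simplified illustration with made-up data and an arbitrary threshold; real fairness audits use several metrics and statistical testing.

```python
from collections import defaultdict

# Illustrative fairness check (demographic parity gap). The sample data
# and the 0.2 threshold are hypothetical.
def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)
print(parity_gap(rates) > 0.2)  # a gap this large is worth investigating
```

Running a check like this on every model release, as part of the regular audits the paragraph above describes, turns "monitor for bias" from a principle into a repeatable governance step.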
Ethical considerations must also guide the use of AI in claims. Organisations should adopt clear ethical standards to ensure AI aligns with their values, prioritising policyholder welfare over profitability. Transparency about how sensitive customer data is handled and a commitment to privacy-by-design principles are key to maintaining stakeholder trust.
By focusing on accuracy, reducing bias, and adhering to ethical practices, insurers can ensure that AI supports equitable and reliable claims processes, strengthening trust among employees, customers, and regulators.
Read more: Art of the possible: How Sprout.ai transforms claims
4. Cultural and organisational change
“Bringing the right people together with the requisite skills and the ability to communicate effectively is probably the greatest challenge.”
- David Fineberg, Head of Claims, Generali UK
Integrating AI into claims processing is as much an organisational challenge as a technical one. Resistance to AI is not limited to claims handlers. Compliance, IT, and risk teams often have concerns about security, accountability, and how AI works with existing systems.
Engaging key stakeholders early is important. Involving compliance, IT, and risk teams at the outset ensures their expertise is factored into AI plans, helping identify and address potential issues before they escalate.
Read more: Just how difficult is it to integrate AI with legacy insurance systems?
Demonstrating AI’s value through small, targeted projects can also ease adoption. By focusing on quick wins, such as automating routine tasks or improving processing times, organisations can show tangible benefits that build confidence among employees. Success in these areas often inspires wider acceptance of AI and its potential to enhance claims handling.
A culture of transparency and learning further supports this shift. Claims teams are more likely to accept AI when they see clear evidence of its capabilities and understand how it complements their work. Ongoing training and open communication about AI’s role help demystify the technology, reducing resistance and creating a sense of shared purpose.
Read more: How to measure ROI of AI in claims processing
5. Ethical and regulatory compliance
“It’s important to ensure models are traceable, auditable, and that there are no false positives in the reading of the data.”
- Scott Cadger, Head of Claims, Underwriting and Product Management, Scottish Widows
Employees, customers, and regulators need assurance that AI systems operate transparently, fairly, and in compliance with relevant laws. Without this, trust in AI, and in the organisations deploying it, can quickly erode.
AI models must be auditable and traceable, ensuring every decision can be explained and justified. This is needed to meet internal governance standards and external regulatory requirements. Regular audits and independent validation of AI systems deliver further confidence that decisions align with ethical and operational expectations.
Data protection is another key area. Insurers should prioritise privacy-by-design principles, ensuring personal data is anonymised and securely handled in compliance with regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Failure to safeguard sensitive information risks not only legal penalties but also significant reputational damage.
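A concrete example of privacy-by-design in a claims pipeline is pseudonymising direct identifiers before data is used for model training. The sketch below replaces personal fields with keyed hashes; the field names and key handling are hypothetical (in production the key would live in a secrets vault, and pseudonymised data still falls under GDPR).

```python
import hmac
import hashlib

# Illustrative pseudonymisation step: direct identifiers are replaced
# with keyed hashes before claim data leaves the claims system.
# Field names and the key are hypothetical; store real keys in a vault.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"
PII_FIELDS = {"name", "email", "policy_number"}

def pseudonymise(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(),
                              hashlib.sha256).hexdigest()
            out[field] = digest[:16]  # stable token, irreversible without the key
        else:
            out[field] = value
    return out

claim = {"name": "J. Smith", "email": "j@example.com",
         "policy_number": "PN-9981", "claim_amount": 1250}
print(pseudonymise(claim))
```

Because the tokens are deterministic for a given key, records for the same policyholder can still be linked for analysis and model training without exposing the underlying identity.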
Fairness is equally important. AI systems should be designed to minimise bias and prioritise fair outcomes. This requires careful scrutiny of training data and decision processes to ensure they do not unintentionally disadvantage certain groups or individuals. Organisations should also establish clear ethical guidelines, ensuring AI is used to support policyholders rather than prioritising short-term cost savings.
Read more: 9 essential features to look for in AI claims processing platforms
More insights from insurance leaders: A business case for AI in claims
We carried out this research because AI can make a huge difference to the experience of insurance customers. It helps insurers deliver faster, fairer results, and provide sympathy and attention when it’s needed. However, we understand that getting started with AI can seem like a daunting task. Get in touch to find out why it’s simpler than you might think, and how much of an impact it could make at your organisation.