Building transparency and trust in AI

In December 2020, the White House issued an Executive Order on the use of “trustworthy AI” across federal government agencies. It serves as the biggest indicator yet of the growing public awareness and concern regarding what we term ethical AI.

The role of AI in various industries, as well as in government agencies, is becoming more significant with every passing month. Yet with public opinion still strongly shaped by five decades of science fiction, it is understandable that trust issues and suspicion of the unknown remain causes for concern. Some of these were outlined by TechCrunch in a piece published at the end of last year.

Sprout.ai is one of the first companies in our industry to have a dedicated AI ethics lead. 

Just as we have a dedicated data protection officer to oversee compliance with GDPR regulations, we take our current and future AI accountabilities equally seriously. Here, we explore what that really means and address some of the concerns raised in the TechCrunch article.

Coming out of the black box

The problem is that previous generations of AI have been built around deep learning black-box processes. The algorithm takes millions of data points, correlates specific features about them and draws conclusions. This leads to inevitable concerns about the potential for bias. Human nature is inherently suspicious of black-box decisions, and users are no longer prepared to blindly accept that the AI knows what it is doing without better transparency.

The technological solution lies in new data pipelines of the type currently being developed here at Sprout.ai. Conceptually, these work in much the same way as a graph database: different data nodes are all interlinked, and the relationships between them can be communicated clearly, intuitively and transparently through knowledge graphs.
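To make the idea concrete, here is a minimal sketch of interlinked data nodes with named relationships, of the kind a knowledge graph exposes. All names here (`Node`, `link`, `trace`, the claim/policy/customer labels) are illustrative assumptions, not Sprout.ai's actual API.

```python
class Node:
    """A data node that holds named links to other nodes."""
    def __init__(self, label):
        self.label = label
        self.links = {}  # relation name -> neighbouring Node

    def link(self, relation, other):
        self.links[relation] = other

# Build a tiny graph: a claim links to a policy, which links to a customer.
claim = Node("claim-123")
policy = Node("policy-9")
customer = Node("customer-42")
claim.link("covered_by", policy)
policy.link("held_by", customer)

def trace(node):
    """Because every edge is named, the chain of reasoning can be read off directly."""
    steps = [node.label]
    while node.links:
        relation, node = next(iter(node.links.items()))
        steps.append(f"--{relation}--> {node.label}")
    return " ".join(steps)

print(trace(claim))
# -> claim-123 --covered_by--> policy-9 --held_by--> customer-42
```

The point of the sketch is that nothing in the chain is hidden: every hop from conclusion back to source data is an explicit, labelled edge that a human can inspect.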

Bringing AI into the 2020s

For both corporations and government agencies, the real challenge lies in switching from these incumbent AI processes to the new world of causal data pipelines. Black-box systems simply do not have this functionality, so the new virtual infrastructure that underlies next-generation algorithms needs to be built from scratch.

That’s easily said, but it presents a genuine headache from a practical perspective when businesses are using existing systems on a daily basis. It is a scenario that we have seen time and again over the past 40 years with businesses that have invested heavily in legacy systems that rapidly become obsolete. The reality is that it would take hundreds of engineers months or even years to bring their AI processes into line with new generation AI. 

Contextual AI leading the way

In that sense, it becomes clear that while previous-generation AI companies have played an important role in bringing AI technology into the mainstream public arena, they have also taken something of a misstep. The time has come to set AI back on the right path: one that is user-friendly and transparently ethical, and one that combines supervised and unsupervised learning to deliver the most powerful results.

This philosophy lies at the heart of the Contextual AI solution we have developed at Sprout.ai. As the name suggests, it provides the context behind every recommendation produced by the AI engine. Having that context in place means that every decision can be reviewed and audited by human eyes and traced back to its source.
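A recommendation that carries its own context might look something like the following sketch: a decision object that keeps the evidence behind it, so a reviewer can audit it. The class and field names are hypothetical, chosen only to illustrate the idea of an auditable recommendation, not the product's real interface.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A decision bundled with the evidence it was drawn from."""
    decision: str
    evidence: list = field(default_factory=list)  # (source, extracted fact) pairs

    def explain(self):
        """Render the decision with each supporting fact and its source."""
        lines = [f"Decision: {self.decision}"]
        for source, fact in self.evidence:
            lines.append(f"  - {fact} (source: {source})")
        return "\n".join(lines)

rec = Recommendation("approve claim")
rec.evidence.append(("policy_document.pdf", "water damage is covered"))
rec.evidence.append(("claim_form.pdf", "loss reported within 30 days"))
print(rec.explain())
```

Because the evidence travels with the decision, an auditor never has to reverse-engineer why the engine recommended what it did; the trail is part of the output.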

As well as delivering leading-edge results, this capability has earned the trust of our users, trust that was in such short supply with previous-generation solutions. Insurance claim handlers are confident that they can safely rely on the tool to do the groundwork, leaving them to step in with their intuition and expertise when it matters most.

Even more than that, though, the solution gives them a platform to push back on or disagree with the recommendations. In this respect, human and machine work like any good team. The process of questioning and providing feedback means the AI can take human opinions on board and get better at what it does.
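That human-in-the-loop feedback step can be sketched in a few lines: the handler either accepts or overrides the recommendation, the disagreement is recorded for later retraining, and the human decision always wins. The `review` function and `feedback_log` are illustrative assumptions, not part of any real system.

```python
feedback_log = []

def review(recommendation, handler_decision):
    """Record whether the handler agreed with the AI; the human has the final say."""
    feedback_log.append({
        "ai": recommendation,
        "human": handler_decision,
        "agreed": recommendation == handler_decision,
    })
    return handler_decision

final = review("approve", "reject")  # the handler pushes back
print(f"{len(feedback_log)} feedback record(s) captured for retraining")
```

The design choice worth noting is that disagreements are not discarded; each one becomes a labelled example that can feed back into the model, which is how the questioning process makes the AI better over time.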

Ethics and accountability

Ethical AI provides another example of how Sprout.ai is striving to be at the forefront of AI innovation, not just in technology development but also in the philosophical considerations that surround it. After all, AI is not intended to supersede humans, but it does play a vital role in making many of our lives easier, more efficient and fairer.
