Recordsure Head of Research, Simon Worgan, explains some of the potential risks of relying purely on AI for key decision making, and the areas where it is best placed to deliver value.
With AI-powered systems increasingly underpinning the financial services industry, it’s important to consider the risks that sit alongside the benefits in this brave new world. Whereas an error on a consumer shopping device might result in the wrong groceries being ordered, an incorrect decision on a mortgage application could have a cataclysmic impact on the wellbeing of a customer. How do we hold technology to account in the same way we would a human adviser?
Does AI lack transparency?
With so much at stake within the financial services sector, it is imperative that any AI solution being relied upon is watertight before it is used in a live environment. Whilst AI has the potential to improve transparency and customer outcomes, an important consideration is whether AI engines themselves lack transparency.
More often than not, the people actually using AI tools aren’t the ones who built them, meaning they won’t have an in-depth understanding of what makes them tick or where the limitations in their capability might lie.
Without careful monitoring, deep learning technology fed only on quantitative data can produce results very different from what was intended. Take Tay, for example: this AI chatbot was released by Microsoft in 2016 but had the plug pulled after just 16 hours when it began sending inflammatory and offensive tweets. On the other hand, building in rules to drive certain characteristics can unintentionally pass on prejudices, with customers ultimately paying the price.
Using algorithms to determine eligibility for financial products or to offer advice to customers presents huge opportunities for firms to drive efficiencies and personalise the customer experience, but those algorithms, left unchecked, carry the potential to cause far greater harm to the consumer than a tweet.
How robust are the alternatives?
Against this, it is important to weigh up the pros and cons of AI versus the alternatives, namely manual processes. Many of the tasks for which AI can be deployed can also be conducted by human operators.
Whilst human expertise is familiar to everyone, and people are capable of more complex decision making than AI (currently, at least), even the most ardent technophobe would concede that no one is perfect. Whilst humans don’t suffer the risk of algorithmic issues, we come with limitations of our own: we sometimes make mistakes, cast inconsistent judgements and misunderstand the information we are presented with. We also need time to rest and sleep, meaning we can generally only cover a fraction of the ground that technology can.
Trusting AI with key decisions
Regulators and legislators alike are putting ever-increasing focus on AI, and not without good cause. AI isn’t necessarily at the point yet where it can be left to make complex decisions. In the context of financial services where any decision can be pivotal for the wellbeing of the consumer, any mechanism used to dictate customer outcomes must be thoroughly reliable before it can be used in a live environment.
It is not only the technology which is changing though: our legal framework was largely written in an analogue world, and in many instances the legislation protecting customers is ill-prepared for the digital revolution we are rapidly diving into. Advisers are responsible for the guidance they provide their customers; if they offer poor advice causing financial harm then they are liable. But what if the advice is automated and a customer’s life is turned upside down due to a quirk in an algorithm?
The low hanging fruit
While shifting key decision making onto AI carries significant risks, the true benefits of adopting technology arguably lie more in supporting the human decision maker rather than replacing them. An area where machine learning technology is already thriving is in lower-value decision making and automation. With AI able to power through the burden of administrative chores, human operators are able to free up time to focus their energy on higher level decision making which is more complex and carries greater risks.
A good example of this is compliance teams reviewing customer interactions with support from the Recordsure platform. The end decisions which need to be made are complex and require the high skill set of the reviewer, but many of the tasks involved in reaching that point can be automated, for instance gathering all the necessary content, analysing it for relevant information and flagging significant cases which need follow-up action. Set-ups like this get the best of both worlds: alongside the assurance that high-stakes decisions are made by human specialists, the technology allows reviewers to cover ten times more ground, resulting in a dramatic improvement in coverage and customer support.
Hybrids like this represent an optimal blend: human decision making power twinned with AI to automate the lower-value administrative work. The result is a system that has the power and reach of an AI-driven tool, but with the assurance of human expertise at all key points. Whilst neither the technology nor the legal framework is necessarily ready for full automation, combining AI with human decision making is already proving a powerful way of efficiently expanding output whilst simultaneously improving customer outcomes.
Recordsure is conducting a survey to understand how disruptive technology is impacting workers in the financial sector at a team level on a day-to-day basis and whether this reflects macro trends. We’re keen to hear views from across the sector, so if you work within financial services or a similar regulated field you can share your insights via our Disruptive Tech & Financial Services survey.