Join them for this latest instalment, where they consider how firms can evidence the outcomes of customer conversations in a consistent and sustainable way – and move away from the more traditional, less effective methods. Plus, they explore the drawbacks of random sampling and highlight credible alternatives that can offer QA teams 100% oversight for the first time.
Part 2: The power of tech-enabled process adherence checks to evidence Consumer Duty outcomes
Garry Evans
I’ve had a number of conversations about the Consumer Duty and how organisations are looking to comply and evidence compliance. And a lot of the time, we're told process adherence checks aren't really that important anymore. So, is that fair?
Olivia Fahy
I don't think it is fair. Process adherence checks are perhaps being viewed as less significant because they can't be solely relied upon to evidence good customer outcomes. While that's true, no single data point can evidence good customer outcomes on its own.
So, the same could really be said of anything. If you're looking for a silver bullet that says, ‘this one piece of data can give you all the evidence that you need’, then it's never going to happen. And if people are suggesting that there is something that can do that, I would be very wary of it.
I don't think we should discount process adherence checks altogether. They might seem a bit basic, but when used in combination with other data points they can still provide really useful direction for risk-based outcome testing, particularly where you increase their coverage or their associated data output through the use of technology.
If you tech-enable process adherence checks, they become a powerful extra data layer in a Consumer Duty outcome testing data portfolio. And a lot of the time, it isn't really the process adherence checks themselves that are the problem; processes are a fundamental part of an organisation's governance and procedures. It's the sampling process that sits around them, and the way process adherence checks are done (normally through random sampling), that's more likely where the problems lie.
Garry Evans
I think you're probably right. There's always going to be a strong correlation between process adherence problems and other problems. If you're trained to do the job properly and you're doing it properly, then you'll tick all those boxes. Equally, there's a clear correlation between getting something wrong in the process and other problems following, because it often comes down to a lack of training.
You're absolutely right about random sampling, too. I've always said that there are two problems with random sampling: one, that it's random; and two, that it's a sample. Every organisation I talk to kind of agrees, but it's a real uphill struggle to get these firms to understand that there are any alternatives.
Olivia Fahy
It's probably worth clarifying what we mean when we're talking about random sampling. Some firms are still using a completely random approach: they want the most representative view across all the cases they need to check, so they pick cases entirely at random.
Other firms use what they'd call a risk-based approach as their predominant methodology, which focuses on selecting more of the higher-risk products or transactions to review. For example, with mortgage reviews, you might review more interest-only mortgages because it's a higher-risk product and therefore carries a higher risk of poor consumer outcomes. But even though that methodology includes more cases relating to higher-risk products, those cases are still selected at random: a human still has to look at each one to determine what's happened, what the outcome was, and whether it's a red, amber or green rated case.
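To make the distinction concrete, here is a minimal Python sketch of the two selection approaches described above. The case volumes, product mix and sample sizes are all invented for illustration; the point is that both approaches still hand randomly chosen cases to a human reviewer.

```python
import random

# Hypothetical case book: 10,000 mortgage cases, tagged by product type.
# All numbers here are invented for illustration.
cases = [{"id": i, "product": random.choice(["repayment", "interest_only"])}
         for i in range(10_000)]

# 1) Completely random sampling: every case has an equal chance of selection.
random_sample = random.sample(cases, 500)

# 2) "Risk-based" selection as described above: oversample the higher-risk
#    product (interest-only), but cases are still picked at random within
#    each pool, and a human still reviews each one to determine the outcome.
interest_only = [c for c in cases if c["product"] == "interest_only"]
repayment = [c for c in cases if c["product"] == "repayment"]
risk_weighted_sample = (random.sample(interest_only, 350)
                        + random.sample(repayment, 150))
```

Either way, only the sampled cases ever receive a rating; everything outside the sample stays invisible to QA.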
When we say you can shift from random sampling to risk-based sampling, we're talking about the ability to get a score before you need any human intervention in the process. You can enhance your processes, whether they already include an element of risk-based selection or are purely random, by using AI to pre-score calls before a human has to look at them. Then you can target reviews at the cases where the AI has identified that something hasn't gone right.
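As a rough sketch of what that pre-scoring step could look like in code, assuming a hypothetical score_call() model and made-up red/amber thresholds (this is illustrative, not any particular vendor's product):

```python
import random

def score_call(call: dict) -> float:
    """Stand-in for a real AI model. A real system would analyse the call
    recording or transcript; here we just return a random risk score."""
    return random.random()

def triage(calls, red=0.7, amber=0.4):
    """Pre-score 100% of calls, then queue only the riskiest for human review."""
    rated = []
    for call in calls:
        score = score_call(call)
        rag = "red" if score >= red else "amber" if score >= amber else "green"
        rated.append({"call": call, "score": score, "rag": rag})
    # Every case now has a rating; QA targets the AI-flagged cases, worst first.
    review_queue = sorted((r for r in rated if r["rag"] == "red"),
                          key=lambda r: -r["score"])
    return rated, review_queue

# Usage: score all 1,000 calls up front, review only the red queue by hand.
all_rated, review_queue = triage([{"id": i} for i in range(1_000)])
```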
That means your QA teams can be much more effective because they can spend more of their time reviewing those cases that have potential issues. But there is a bit of an uphill struggle to shift mindsets and step away from the traditional way of doing things.
There's a misconception that if a firm starts targeting its QA teams at red, high-risk cases, it will sound alarm bells because suddenly its stats show lots of red cases being checked. But there's actually a much bigger picture: the AI provides a score for every case, so a firm's overall statistics should remain the same. You end up with the scores the AI has given (generally the same proportion of red, amber and green cases as you currently have), but based on the AI looking at every case rather than an extrapolation from a random sample. And then you've also got a subset of those cases, the riskiest ones, that have been reviewed by a human.
It does require a mindset shift, but it's a decision between two options. One is saying, ‘we're just going to keep things as they are and manually review about 5% of cases because I'm comfortable with that; I'll get about 2% reds within that sample, and that's a number I can live with’. The other is saying, ‘I'm going to use AI to get an initial score for 100% of cases so I've got a full view of my risk profile across all those cases, and then I can be much more effective with the resource I have by targeting the cases with the most potential harm’. I know which option I'd choose, and I'm pretty sure I know which the FCA would choose as well.
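Here is a small sketch of that trade-off, using invented numbers that mirror the 5% sample and 2% reds mentioned above: the sampled estimate is noisy and varies from run to run, while scoring every case gives exact proportions.

```python
from collections import Counter
import random

# Invented population of 10,000 rated cases: 2% red, 8% amber, 90% green.
population = ["red"] * 200 + ["amber"] * 800 + ["green"] * 9_000
random.shuffle(population)

# Traditional approach: extrapolate the risk profile from a ~5% random sample.
sample = random.sample(population, 500)
sample_rates = {rag: n / len(sample) for rag, n in Counter(sample).items()}

# AI-scored approach: a rating exists for 100% of cases, so the proportions
# are exact rather than an extrapolation.
full_rates = {rag: n / len(population) for rag, n in Counter(population).items()}

print(sample_rates)  # noisy estimate; changes every run
print(full_rates)    # exactly 2% red, 8% amber, 90% green
```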
There's also the insight you're then able to generate. If you're only reviewing a small sample, and the cases where problems lie are an even smaller subset of that sample, you never get a clear understanding of the organisation's thematic problems.
Check back here for part three of the series, ‘Futureproofing for Consumer Duty: Utilise speech analytics to process risks in customer conversations’. Plus, catch up on part one: ‘The role of RegTech to help monitor and evidence Consumer Duty outcomes’.