However, content creation and content analysis are two very different things, as Recordsure's Senior Product Manager, Olivia Fahy, explains in our latest video. And it's through analysing customers' conversational content that QA teams can embrace the pioneering processes of ConversationReviewAI, in turn unlocking a smarter way to complete assurance reviews. Alongside Recordsure's Chief Product and Commercial Officer, Garry Evans, Olivia also explains the fundamentals of tone and sentiment analysis and how this relates to one of our flagship products.
Then in the final part of this fourth instalment, Garry explores the efficiency and effectiveness savings that firms can achieve when they adopt ConversationReviewAI’s innovative tech-enabled approach to analysing customer calls.
Part 4: Transform your Quality Assurance to innovate beyond random sampling
Garry Evans
Another question that we hear a lot, and that sparks a great deal of interest in the context of vulnerable customers, is around sentiment and tone analysis. Do we have sentiment and tone analysis in ConversationReviewAI?
Olivia Fahy
It’s a very interesting topic. I think tone and sentiment analysis largely relies on keyword spotting or quite a crude analysis that predicts whether the voice that's speaking is positive, negative or neutral. And that's quite hard to do because we all speak differently. Unless you've got a baseline for how every single person speaks and whether they were feeling positive, neutral or negative, it's very hard to determine across the board an individual’s sentiment or tone. It can end up increasing workload rather than reducing it.
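To illustrate the kind of crude keyword-spotting approach Olivia describes, here's a minimal sketch. The word lists and scoring are purely illustrative and not drawn from any production system, ConversationReviewAI included:

```python
# A deliberately crude keyword-spotting sentiment classifier, of the kind
# described above. Word lists and scoring are purely illustrative.
POSITIVE = {"thanks", "great", "happy", "helpful", "pleased"}
NEGATIVE = {"worried", "complaint", "frustrated", "wrong", "unhappy"}

def crude_sentiment(utterance: str) -> str:
    words = utterance.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# The weakness Olivia points out: with no per-speaker baseline, the same
# words score identically regardless of who is speaking or how they speak.
print(crude_sentiment("thanks very helpful"))        # positive
print(crude_sentiment("I'm worried this is wrong"))  # negative
print(crude_sentiment("I suppose that's fine"))      # neutral - misses mild dissatisfaction
```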
Sentiment analysis models are mainly trained on public datasets, like YouTube videos, tweets, Amazon or IMDB reviews. So largely it’s first-person content such as people writing comments, tweets and things like that. Therefore, it’s generally not that relevant when you apply it to an actual conversation between two people in a financial services scenario.
But having said that, if it is something of interest to clients, it's an add-on to the product that we can offer. It's not part of our core product currently, but we're constantly evaluating the tone and sentiment analysis techniques that are out there. And once we've found something that we're confident aligns with the high standard of output of the rest of our core product modules, we'll build it into the platform and the fundamentals of the product.
Garry Evans
And you can't use the term Artificial Intelligence these days without being asked about ChatGPT. So surely that must be incorporated into ConversationReviewAI as well?
Olivia Fahy
ChatGPT, or generative AI, is about creating novel content. I'm sure a lot of people have gone on to ChatGPT and said 'write me a poem in this style' or 'describe this to me in this style' and it does that. It creates something new in a style that it can find a record of.
But the way we use AI is to understand content, which is a key difference. Because of that, it's not something that would add value to the core purpose of our product. Adding ChatGPT wouldn't really take our product any further. If it evolves to do so, then we'll absolutely utilise it, because it's an incredible piece of innovation. But we don't want to build it in for the sake of saying it's included. We'd want a really good reason that it would add value before building it into the core product.
Garry Evans
Absolutely. And it's everywhere. But it doesn't do everything. We have to make sure that we're incorporating the appropriate things into our products; they have to be right.
Now I want to focus on what this means in real terms. We've talked about efficiency and effectiveness savings, and I just want to bring this to life a little with my Chief Commercial Officer hat on.
The current problem for many QA teams is that they're only reviewing a random sample. For instance, a firm might pull a 10% random sample, review all of it, and find that only 8% of those reviewed calls are bad. This is inefficient: only 8% of the QA team's time was spent on calls that actually had a problem or required any help, effectively wasting 92% of their time. Whereas if you use ConversationReviewAI, the sample size is 100%.
By using AI, ConversationReviewAI analyses every call and scores whether there is a problem, and a firm can then focus its QA resource on reviewing the calls the AI has identified as bad. For example, if you've previously found an 8% bad-call rate through random sampling, that rate is still likely to hold whether you look at a 100% sample or a 10% sample. So you go from reviewing a 10% random sample with 100% of your FTE, to reviewing the 8% of calls the AI has flagged, which should mean 80% of the FTEs are needed to review those calls. That would be the case if the calls took as long to review using ConversationReviewAI, but they don't. ConversationReviewAI's Classified element halves the time each call takes to review, doubling your QA efficiency.
So now you go from a 10% random sample taking up 100% of your resource and catching only 10% of your bad calls, to using 40% of the resource to look at all of your bad calls. In other words: 10% of bad calls for 100% of your FTE through random sampling, or all of your bad calls for 40% of your FTE through ConversationReviewAI. It's a very straightforward saving; you're not doing anything dramatically different apart from using the system to make call reviews faster.
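To make that arithmetic concrete, here's a minimal sketch of the calculation. All figures are the purely illustrative ones from Garry's example (a hypothetical call volume, a 10% sample, an 8% bad-call rate, and the stated assumption that Classified halves per-call review time); they're not product benchmarks:

```python
# Worked version of the illustrative figures above.
total_calls = 10_000                 # hypothetical call volume
bad_rate = 0.08                      # 8% of all calls have a problem

# Traditional approach: review a 10% random sample in full.
sample = 0.10 * total_calls          # 1,000 calls reviewed
bad_found_random = bad_rate * sample # 80 bad calls found (10% of all bad calls)

# Tech-enabled approach: AI scores 100% of calls, QA reviews only flagged calls.
bad_calls = bad_rate * total_calls   # 800 bad calls, all flagged
fte_vs_random = bad_calls / sample   # 0.8 -> 80% of the FTE for the same review speed
fte_with_classified = fte_vs_random / 2  # 0.4 -> 40% of the FTE with halved review time

print(f"Random sampling: review {sample:.0f} calls, "
      f"find {bad_found_random:.0f} of {bad_calls:.0f} bad calls")
print(f"ConversationReviewAI: review all {bad_calls:.0f} bad calls "
      f"with {fte_with_classified:.0%} of the FTE")
```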
Utilising the Assure element of ConversationReviewAI to identify those bad calls means you can be much more effective with your QA resource and spend all your time looking at the bad calls. Now, you don't then have to take a 60% saving and reduce your QA team. You can spend time looking at some of the good calls to identify best practice; I know a lot of organisations do that. But this is the reason why, in real terms, you should be moving to a tech-enabled QA process rather than sticking to the traditional random sample. There are costs to using the system, but they're typically more than covered by the reduction in FTE and the cost of running your QA team.
Check back here for the fifth and final part of the series, 'Futureproofing for Consumer Duty: Evaluating, evidencing and improving consumer outcomes'. Plus, make sure to catch up on parts one, two and three here.