Trust is a big concept. It forges relationships and advances ideas, and is pretty much the foundation upon which societies are built.
And once trust is lost – either destroyed with one action or eroded over time – it can be a monumental struggle to get it back.
When it comes to artificial intelligence (AI), that trust wasn’t necessarily there to begin with. For as long as humans have conceived of artificial beings capable of thought, we’ve preoccupied ourselves with the ethics and dangers attached to them.
Science fiction, reality, and everything in between
Bring up AI to many people and they’ll still envision dystopian futures, human extinction and robot overlords. This, despite the fact that most of us rely on AI every day to live our comfortable modern lives: recommendations for what we watch on Netflix, the spam filters in our inboxes, Siri.
And then along comes a scandal like the Facebook and Cambridge Analytica incident, and we as a general public see our worst fears realised. Our data is being mined to influence us, true privacy is non-existent, the machines are taking over our lives, and so on. Any trust that had been building is wiped away and the conversation casting AI as a nefarious enemy resumes.
AI meets financial services
Let’s look at another industry with trust issues. After the catastrophic loss of faith brought on by the financial crisis, the financial sector has spent more than a decade fighting to regain public trust. In many ways this struggle for reconciliation has led to positive steps forward in responsibility, regulation and innovation. But there is still a long way to go.
In a recent speech delivered at The Alan Turing Institute’s AI ethics in the financial sector conference, the FCA’s Christopher Woolard considered the use of AI for consumer good. In it, he examined the important role artificial intelligence has to play in the future of the financial services sector, and what part the FCA will have in that future.
He hits on a very important point, that ‘technology relies on public trust and a willingness to use it’.
Technology for good
AI has the power to bring monumental positive change to financial services. Technology like ours at Recordsure facilitates a future where automation can make good customer care an inherent part of the process. Elsewhere, AI is tackling financial crime in a way that hasn’t been possible before. Not to mention the everyday ways in which machine learning and technology improve accessibility for customers interacting with their finances.
Given the chance, AI will be fundamental to restoring the public’s faith in the financial sector. If – that is – it can build up its own store of public trust.
And that, as usual, comes down not to the AI itself, but to us, the humans who create it in the first place. Trust comes from a belief that best intentions are being acted upon, and in this case the key will be whether these powerful AI tools are consistently used to add value for customers or employed to exploit them.
Solving this paradox
Popular uptake is a good indicator of credibility and trust. But the lack of wide adoption, typical of any new innovation, can have the opposite effect. So what do we do here?
Customers respond best to new tech and processes when they’re explained properly. That doesn’t mean overloading people with terms and conditions, which Woolard points out can be off-putting, but being honest and transparent. There’s some work to do to get the balance right, but it’s a place to start.
Firms use more and more tech tools for all sorts of reasons. When customers, and their data, are involved, you need to articulate what that tech is doing and why it is ultimately in their best interest. If there’s no explanation of why it’s valuable to the customer, it’s time to reconsider whether that technology should be used at all.
In the case of Recordsure, we have seen a clear trend among end customers who were offered the service by their banks. In most instances Recordsure was embraced, and the small minority of people who rejected the technology correlated strongly with cases where its purpose or benefits had not been communicated.
What this shows us
When the interaction began with an open conversation explaining what the technology did, customers were on board. They understood that the technology was there to offer them transparency, give them confidence that they were being provided sound advice and assure them that in the unlikely event of any issues, the AI could identify the problem early.
The lesson here is clear: deploying new technology must be customer-focused. And for it to be successful, the benefits need to be articulated in a transparent, customer-centric and accessible way.
As firms make use of more AI, they need to keep customers in the loop. Only then will we see trust being built in both tech and financial services, preventing the valuable work of artificial intelligence from being undermined.