Responsible AI

We work to the highest AI principles to ensure safety, security, ethical responsibility and accountability

AI Transparency & Trust

Our ReviewAI solutions are designed to maximise the transparency of AI findings and to provide authoritative, trustworthy conclusions.

Fundamental to our product offerings is the concept of ‘human-in-the-loop’: no AI finding or conclusion can affect a review outcome without human supervision, except in the case (as noted below) of prescribed remediation projects.

  • Where AI is used to classify the type of a record or to date it, the record is presented to a human reviewer, who can verify or correct these AI determinations whenever the record, its classification or its date is pertinent to the outcome of their review
  • Where AI is used to identify relevant sections of records, to add a topic or to deduce a finding from within such record sections, the user interfaces of the product are designed to provide simple navigation to these sections, for human review and correction where necessary
  • Where AI is used to score records, for sampling/triage or for other purposes, the basis of any aggregated scoring is presented, whilst we encourage client review teams to both:
    • Verify the accuracy of any finding that may have weighting on the outcome of the case review that is scored by the AI as ‘risky’ (triaged-in records)
    • Sample a percentage of case reviews that are not scored by the AI as ‘risky’ (triaged-out records), again for verification purposes
  • For day-to-day file review and quality-assurance use cases, the product never concludes a case as risky, unsuitable or otherwise on AI conclusions alone; such cases are always presented to our clients’ review teams for final judgment
  • For prescribed remediation use cases, where the Recordsure AI is sometimes used to conclude a final outcome for a proportion of cases without subsequent human review, the AI training methodology and human-verified test results over a statistically significant proportion of cases are presented to any appointed Skilled Person acting on behalf of a regulator, and typically also to an associated legal team. They must be satisfied that the AI provides an overall beneficial outcome for customers, usually based on a combination of high accuracy and increased project speed
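The triage policy above can be sketched as a simple routing rule. This is an illustrative sketch only: the risk threshold and sample rate are hypothetical values, not Recordsure's actual parameters.

```python
import random

RISK_THRESHOLD = 0.7   # hypothetical cut-off for 'risky' (triaged-in)
SAMPLE_RATE = 0.10     # hypothetical QA sample of triaged-out cases

def route_for_review(case_id: str, ai_risk_score: float) -> str:
    """Decide whether a case must be human-reviewed.

    Triaged-in (risky) cases are always reviewed; a random sample
    of triaged-out cases is also reviewed for verification.
    """
    if ai_risk_score >= RISK_THRESHOLD:
        return "review"          # triaged-in: verify every AI finding
    if random.random() < SAMPLE_RATE:
        return "sample-review"   # triaged-out: spot-check a percentage
    return "no-review"
```

The key property is that no triaged-in case can bypass human review; only the spot-check rate for triaged-out cases is configurable.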

Also key to trust is the ‘robustness’ of AI. To this end:

  • Recordsure develops deterministic AI to ensure consistent findings for repeated inputs
  • All AI is thoroughly tested to ensure suitable outcomes, given expected or unexpected variances in input data

Managing AI Performance Bias

Recordsure trains all AI models to minimise bias and discrimination.

Our rigorous test processes evaluate candidate models for accuracy, and where bias in findings is introduced through imbalanced or scarce data, we seek to correct this through one or more of the following:

  • Supplementing scarce data classes with additional training data
  • Increasing/decreasing the weights of imbalanced training data
  • Where class data is overly scarce and bias cannot be corrected, the class is fully removed from our model and that specific AI intent/purpose is not provided as part of the solution. That is, it is better not to attempt to produce a result than to risk a potentially incorrect or biased one
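Inverse-frequency class weighting is one common way to implement the second correction above; the sketch below is illustrative (the `min_count` threshold is hypothetical) and also drops overly scarce classes, in line with the final bullet:

```python
from collections import Counter

def inverse_frequency_weights(labels: list[str],
                              min_count: int = 5) -> dict[str, float]:
    """Weight each class inversely to its frequency so that rare
    classes are not swamped by common ones during training, and
    drop classes too scarce to train without bias (better no
    result than a biased one). min_count is illustrative only."""
    counts = Counter(labels)
    kept = {c: n for c, n in counts.items() if n >= min_count}
    total = sum(kept.values())
    # Weight = total / (num_classes * class_count): rarer => heavier
    return {c: total / (len(kept) * n) for c, n in kept.items()}
```

For example, with eight "a" labels, four "b" labels and two "c" labels at `min_count=3`, "c" is dropped and "b" receives twice the weight of "a".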

Managing AI Drift

AI ‘models’ (i.e. the processing implementation of AI) encounter ‘drift’ over time, typically decreasing their performance.

This is not due to any changes in the models themselves but instead due to external events such as:

  • Changes in our clients’ business processes that change the way they conduct conversations or the way they style and write documents
  • Clients’ staffing changes, M&A activity or other business changes to broker/adviser network structure that alter who is conducting or documenting customer interactions
  • Changes in our clients’ product offerings or changes to their suitability for various customer cohorts
  • World events – such as pandemics, war, inflation or interest rate changes – that change the nature of conversations and may influence customer buying behaviours or the suitability of products for customer groups
  • Regulatory evolution that changes the manner in which advised/unadvised products must be presented and supported

Recordsure advises clients to allow for regular re-evaluation of model performance and, where necessary, to update AI models to counter drift. Wherever it is acceptable to our clients, we contractually accommodate regular drift reviews over the licensed product term.
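Drift re-evaluation of this kind is commonly done by comparing accuracy over a recent window of human-verified outcomes against the accuracy measured at model sign-off. The sketch below is a generic illustration (the tolerance and windowing are assumptions, not Recordsure's actual methodology):

```python
def drift_alert(baseline_accuracy: float,
                recent_outcomes: list[bool],
                tolerance: float = 0.05) -> bool:
    """Flag drift when accuracy over a recent window of
    human-verified outcomes (True = AI finding confirmed) falls
    more than `tolerance` below the accuracy at model sign-off.
    Returns True when re-evaluation is recommended."""
    if not recent_outcomes:
        return False  # nothing to measure yet
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return recent_accuracy < baseline_accuracy - tolerance
```

In practice the window would be sized for statistical significance, and an alert would trigger a model review rather than an automatic retrain.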

Safe AI

Recordsure AI Safety is ensured through the three principles of Transparency & Trust, Managing Performance Bias and Managing AI Drift, as above.

In addition:

  • The purpose of our AI-based products is typically to assist in the verification of good customer outcomes; we do not engage in any AI utility that prescribes or suggests customer outcomes
  • Our products are designed to ensure that it is visible to users when they are interacting with AI or with information provided by AI
  • Through rigorous testing of AI we ensure that the limitations of its underlying technology and training data are understood, so that product or AI outcomes that might exceed those limitations are not produced
  • Our products are designed not to place undue influence on human expert decision making when verifying customer outcomes
  • Our products seek to incorporate human feedback loops, whereby human experts can challenge AI findings for continuous improvement of the AI

AI Security

Recordsure considers its own systems and data security, as well as that of our clients and their customers, to be of the utmost importance.

  • Recordsure is ISO27001 certified and all AI system management and data handling is performed in accordance with that certified Information Security Management System (ISMS)
  • Additionally, security contracts are enforced for each of our clients, thereby governing the use of AI systems and the handling of each client’s data and their customers’ data
  • Recordsure does not mix ‘raw’ data (document or audio) belonging to one client with that of any other client on any system; all such raw data is electronically separated, as per each client’s contract
  • Recordsure generates some AI models from composites of ‘derived data’ – that is, raw data processed into an irreversible form that is not human-comprehensible, from which no client or customer PII could be recreated or deduced
  • In accordance with our ISMS, all open-source or other 3rd-party software used in AI/data processing is vetted for applicability and against all related security concerns, such as the inclusion of malware or bugs that may allow security breaches

Economic & Societal Impact of AI

The use-cases for Recordsure’s AI solutions are most typically for the efficient and accurate review of business-to-customer interactions, for process adherence or to ensure compliance in regulated activities.

We consider that the purposes of our AI are to:

  • Ensure fairer treatment and outcomes for customers
  • Improve corporate governance and controls
  • Save time and money for our corporate clients

We do not advocate the use of our AI for job losses or skills reduction; on the contrary, we promote our AI as a way for file review personnel to spend more time applying their subject matter expertise to the qualitative aspects of their role, by reducing the time lost to routine information search and discovery.

Our products that use AI are designed to be inclusive and equally usable by people with varying accessibility needs.

Recordsure does not provide AI to companies that may engage in illegal or unethical practices.

Wherever possible, we take opportunities to present the responsible use of AI, such as that which we develop, through blog articles, at trade shows and one-to-one with our clients.

AI Governance, Control & Accountability

All AI is developed and trained in accordance with the Recordsure ISMS, including our SDLC (Software Development Lifecycle).

These practices require that all technology, AI included, is designed, developed and tested by those skilled in such practices.

Accountability for Responsible AI lies with the company board, advised by our senior leadership team, with the following roles having specific functions:

  • Chief Technology Officer: for the selection of AI tools and methods, their design, development and testing
  • Chief Information Security Officer: for the secure vetting and implementation of AI technologies and for the security of AI data governance
  • Chief Product Officer: for defining the AI utility for our clients and for ensuring that all AI delivered is fit for purpose

Contributing to the subject matter knowledge and expertise of the above functions are our Head of Science and team of data, language and speech scientists.

Our governance team is required to maintain up-to-date expertise across AI best practice and responsible use, through courses, online learning and attendance at trade and specialist events.

Internal Business Use of Online 3rd-party AI

Recordsure encourages internal staff to use online 3rd-party AI, wherever it is safe to do so, to enhance their own productivity in areas such as online research, developing copy or writing source code.

Guidance is provided on how to treat the authority of outputs from common 3rd-party AI tools, and on ensuring that if AI outputs are circulated within or outside the business, they are correctly identified as such.

It is strictly prohibited to provide Recordsure trade secrets, our clients’ trade secrets or our clients’ data to any such 3rd-party AI, just as with any online site, as required by our Internet & social media use policy.
