Artificial intelligence versus regulation: friends or foes?
How can regulators protect consumers without stifling the development of automated technologies?
As we stand on the threshold of the fourth industrial revolution, the landscape ahead includes developments in areas such as blockchain, the Internet of things and nanotechnology: developments that are taking place at a faster rate than many of us are able to keep up with – or even easily comprehend.
Artificial intelligence (AI) is one of these very interesting areas of development that could revolutionise our day-to-day lives, much as the Internet did in the last century.
The term ‘AI’ is often used to refer to the development of computers, computer programs or applications that can, without direct human intervention, replicate processes traditionally considered to require a degree of intelligence, such as strategy, reasoning, decision making, problem solving and deduction. For example, an AI program can use algorithms to analyse datasets, then make decisions and take actions based on the output of that analysis – an analysis that would traditionally be done by a human. AI programs can also be developed to interact with people in ways that mimic natural human interaction, for example in online customer service support – sometimes to the extent that the customer cannot easily tell they are not dealing with a human.
AI has the potential to supplant a great number of human processes – more cheaply, faster and without human error. In practice, however, current applications are more limited, constrained by practical factors such as the sheer processing power required (especially pending a breakthrough in quantum computing) and by design limitations such as the inability to learn by extrapolating from a handful of failures, or to apply common sense to unfamiliar scenarios.
Is this development a good thing? AI can cut costs, eliminate human error, and potentially make products and services available to those who might not otherwise be able to access them. But what about the possible downsides?
Fifty years ago, in the film “2001: A Space Odyssey”, an AI slowly turns from being the humans’ assistant to pitting itself against them. HAL, the Heuristically programmed Algorithmic computer, 'realises' that the fallibility of humans stands in the way of its achieving its operational objectives, and therefore seeks to remove these obstacles. Presciently, this film encapsulated many of the present concerns about AI – what will stop the machines 'deciding' to exercise the powers they are given in a way that we don't like? For example, what is our recourse if we need a computer to evaluate a request from us, such as deciding whether or not to accept a job application, and the computer says no? We can try to appeal to other humans on an emotional level, or challenge the basis for their decision; a computer program implacably following an incomprehensible algorithm does not present that option.
Regulation is the most frequent knee-jerk response to any such question of “what if…”. However, many regulators are wary of regulating in a vacuum – of prescribing or proscribing technologies themselves rather than focusing on particular applications of those technologies. The well-known risk of doing otherwise is that technology develops so quickly that regulation is left perpetually lagging behind.
In the financial services space, AI has already been making inroads on market practices, as evidenced by the following examples.
- Behavioural premium pricing: insurance companies have been deploying algorithms to, for example, price motor insurance policies based on data gathered about the prospective policyholder's driving habits.
- Automated decision making: credit card companies can decide whether to grant a credit card application based on data gathered about the applicant's spending habits and credit history, as well as age and postcode.
- Robo-advice: a number of firms have developed offerings that can provide financial advice to consumers without the need for direct human interface, based on data input by the customer regarding means, wants and needs, and measured against product models and performance data to find appropriate investments.
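The automated decision-making example above can be illustrated with a minimal sketch. All of the rules, weightings, thresholds and field names below are hypothetical, invented purely to show the shape of such a system; a real credit-scoring model would be far more sophisticated:

```python
# Hypothetical sketch of an automated credit-card decision.
# The rules, weights and thresholds are invented for illustration only.

def score_application(applicant: dict) -> tuple[int, list[str]]:
    """Return a score and the list of factors that contributed to it."""
    score = 0
    factors = []
    if applicant["credit_history_years"] >= 3:
        score += 2
        factors.append("established credit history")
    if applicant["missed_payments_last_year"] == 0:
        score += 2
        factors.append("no recent missed payments")
    if applicant["monthly_spend"] <= 0.5 * applicant["monthly_income"]:
        score += 1
        factors.append("spending well within income")
    return score, factors

def decide(applicant: dict, threshold: int = 4) -> dict:
    """Approve if the score meets the threshold, recording why."""
    score, factors = score_application(applicant)
    return {"approved": score >= threshold, "score": score, "factors": factors}

decision = decide({
    "credit_history_years": 5,
    "missed_payments_last_year": 0,
    "monthly_spend": 1200,
    "monthly_income": 3000,
})
```

Recording the contributing factors alongside the outcome, as this sketch does, is one simple way a firm could make an automated decision explainable to the customer it affects.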
Automating these processes with AI offers the ability to drive down the costs of servicing a given market, while potentially eliminating the rogue variables introduced by human fallibility. AI could thereby help to make financial services products more accessible, enabling them to be offered at a price affordable to a greater section of the public.
However, we cannot forget potential risks: what if an insurance pricing algorithm becomes so keenly aligned to risk that a segment of higher-risk – and potentially vulnerable – customers are effectively priced out of the market? How can an algorithm be held accountable if a customer feels that a decision about their credit card application was wrong? And what if the questions about investment intentions are too focused on what customers say they want, and miss out on the nuances of a customer's wishes and fears that an experienced human advisor may know to pick up on and pursue?
What could the regulators do to address these potential risks, and would there be a detriment to consumers if they materialised?
One option, and likely only part of any solution, is to ensure firms are mindful of the consumer and market protection outcomes and objectives at the root of the regulations with which they must comply, and that they will be held accountable when their products and services fail to deliver those outcomes. For example, the UK's Financial Conduct Authority (FCA) requires firms providing services to consumers to treat their customers fairly and to ensure that their communications are clear, fair and not misleading. The onus is then on firms to ensure that, whatever new technologies they adopt, these outcomes are consistently achieved. For the insurance firm described above, this could involve paying close attention to the parameters and design of the algorithm, to ensure that, for example, a certain pricing threshold is not breached. For the credit card firm, it could mean ensuring that if a customer's application is declined, the customer is told how that decision was reached and what factors it was based upon. For the robo-adviser proposition, it could involve a periodic review of investments and portfolios by a human adviser.
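A pricing-threshold safeguard of the kind described above could, at its very simplest, look something like the following sketch. The base premium, the telematics-derived risk loadings and the cap are all hypothetical figures chosen for illustration, not a real pricing model:

```python
# Hypothetical sketch of a pricing cap wrapped around a risk-based
# motor-insurance premium algorithm. All figures are invented.

BASE_PREMIUM = 400.0   # illustrative annual base premium in GBP
MAX_LOADING = 2.5      # the safeguard: never charge more than 2.5x the
                       # base premium, however risky the profile looks

def risk_loading(telematics: dict) -> float:
    """Toy risk multiplier derived from driving-behaviour data."""
    loading = 1.0
    loading += 0.02 * telematics.get("hard_braking_events", 0)
    loading += 0.05 * telematics.get("night_driving_hours", 0)
    return loading

def quote_premium(telematics: dict) -> float:
    """Apply the risk loading, but never exceed the agreed cap."""
    loading = min(risk_loading(telematics), MAX_LOADING)
    return round(BASE_PREMIUM * loading, 2)
```

The cap means that even an extreme risk profile cannot push the quote beyond an agreed multiple of the base premium – one crude way of ensuring higher-risk customers are not simply priced out of the market by the algorithm.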
Practically, regulators will need to work with firms to ensure that the need to comply with such outcomes does not block development. Since 2016, the FCA has offered firms a regulatory 'sandbox': a 'safe' environment in which to develop new ideas, containing the risk of customer detriment while products are in development and helping to identify appropriate consumer protection safeguards that can be built into new products and services. The FCA is now exploring expanding this sandbox onto a global stage, working with other regulators around the world to support firms that may offer their products in more than one regulatory jurisdiction. It has also been meeting organisations that are working to expand the technology's current boundaries and applications, at specialist events around the UK such as the FinTech North 2018 series of conferences, which raise the profile of fintech capability in the North of England.
By working together to balance potentially competing factors such as technological development and consumer protection, regulators and the industry may be able to provide a stable platform to develop AI, while overcoming or at least assuaging the potential fears of the target audience for these developments. In “2001: A Space Odyssey”, the conflict between AI and humans was only resolved by the 'death' of the AI. Let's hope that in real life, a way of co-existence can be found instead.
Roseyna Jahangir is an associate in the financial services team at Womble Bond Dickinson.