updated on 02 July 2019
Question: Are UK lawyers prepared for the changes and new possibilities promised by artificial intelligence?
With the rise of artificial intelligence (AI) across all businesses, the legal, ethical, regulatory and compliance risks associated with its development, use and deployment will become critical. AI is already used in fields as diverse as algorithmic trading, robo-advice, big data predictive analytics, industrial robotics and autonomous vehicles. It is therefore essential that businesses address the possible implications of the use of AI and that lawyers are able to advise on the issues it creates.
There are a number of unique and inherent properties which distinguish AI from other forms of technology and demand that the management of risk associated with AI’s use be treated differently:
AI learns through access and exposure to historic data. The quality of those data sets will determine the effectiveness, or otherwise, of the AI.
AI continues to learn from interactions with new data in live use and in real time. These interactions can include exposure to new structured or unstructured data sets gained from exposure to the environmental stimuli in which the AI operates.
AI can make decisions independently and also determine the basis upon which those decisions are made.
AI’s decision-making capabilities and outputs also continue to evolve and develop over time.
AI can be autonomous in the way that it learns, interacts with its environment, makes decisions and determines the basis upon which those decisions are made. It is this autonomy, coupled with the ubiquitous deployment and use of AI in so many regulated industries and markets, that gives rise to an equally unique set of ethical, legal and regulatory compliance risks.
The UK government’s industrial strategy identified AI and data as one of the four ‘grand challenges’ that the UK currently faces. As such, the stated ambition of the Department for Business, Energy and Industrial Strategy (BEIS) is “to put the UK at the forefront of the AI and data revolution”. The UK AI sector deal involves the injection of nearly £1 billion of private and public sector funding into the industry and focuses on five areas:
People – developing digital skills by investing in key skill training and retraining, as well as widening the scope to attract talent from abroad.
Infrastructure – creating new data sharing frameworks to address the barriers of sharing publicly and privately held data to allow for “fair and equitable data sharing between organisations”.
Ideas – boosting research and development (R&D) spending in the private sector to 2.4% of GDP by 2027, rising to 3% in the longer term.
Business environment – the creation of a new AI council which brings together respected leaders from academia and industry, and the creation of a new government delivery body, the Office for Artificial Intelligence, as well as a new Centre for Data Ethics and Innovation.
Places – ensuring that businesses around the UK grow through the use of AI, supported by local industrial strategies.
However, the legal landscape relating to the use and deployment of AI is uncertain and continually developing. Because of this lack of legal certainty, those investing in or acquiring AI technologies or companies will need a new approach to due diligence to mitigate risk.
This approach will require lawyers to consider a number of questions:
How will AI impact different industry sectors? What are the barriers to its use and what efficiencies or cost benefits can be obtained through its deployment?
What is the ethical, legal, regulatory and compliance basis upon which AI makes decisions?
In regulated markets where AI is used, has ‘compliance by design’ been built into AI decision making? And is the decision making consistent, fair and transparent for those who are impacted by such decisions?
How can bias in decision making be minimised? AI is trained using data, but if there is inherent bias in that data, or in the business processes or systems that the AI is replicating, it is no surprise that the bias will be duplicated. Furthermore, how can AI be protected from ongoing bias in live interactions?
How will liability be allocated and what are the types of loss that might be suffered when AI causes damage by operating outside of its parameters? Can liability for loss or damage caused by AI be insured against?
What impact will AI have on a company’s intellectual property (IP) strategy? And who will be liable if AI infringes a third party’s IP?
How will AI impact a company’s workforce, supply chain and customers?
How will the use of AI impact the legal sector?
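The point above about bias being duplicated can be illustrated with a toy sketch. The data, groups and decision rule below are entirely hypothetical: a trivial "model" that learns the majority outcome for each applicant profile from historic loan decisions. Because one group was systematically rejected in the historic data, the learned rule replicates that rejection even for otherwise identical applicants.

```python
from collections import Counter, defaultdict

# Hypothetical historic records: (group, high_income, approved).
# Group "B" applicants were systematically rejected in the past.
historic = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(records):
    """Learn the majority historic outcome for each (group, income) profile."""
    outcomes = defaultdict(Counter)
    for group, high_income, approved in records:
        outcomes[(group, high_income)][approved] += 1
    return {profile: counts.most_common(1)[0][0]
            for profile, counts in outcomes.items()}

model = train(historic)

# Two applicants with identical incomes but different groups:
print(model[("A", True)])   # True  - approved
print(model[("B", True)])   # False - rejected; the historic bias is replicated
```

Nothing in the "training" step inspects fairness: the model simply reproduces the patterns it was given, which is precisely why the questions above about auditing training data and monitoring live decisions matter.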
To operate in an efficient and secure manner, businesses and markets need the certainty of law and regulation. If Britain is to be a world leader in AI technology, Parliament and the courts will have to provide that certainty.
Mike Rebeiro is a senior adviser and the head of digital and innovation at Macfarlanes.