LCN Says

The law, AI and the defence industry

updated on 14 November 2023

Reading time: seven minutes

A law career can take you into a variety of sectors. Four years ago, I moved into the complex, ever-changing world of defence technology, and the transition to this world of futuristic concepts has been an eye-opener. The defence industry is about so much more than bullets and guns. It’s a sector that has always been at the forefront of technological innovation, from medicines and mobile phones to the internet and now AI. AI has the potential to revolutionise military operations, from enhancing decision-making to automating routine tasks; as in other industries, AI could take over jobs considered too “dull, dirty or dangerous” for people.

But there’s unease about AI in defence. The rise in the use of autonomous and unmanned vehicles (commonly known as drones) in both military and commercial settings has been accompanied by heated debate over how legislation will keep pace with such rapid advances in technology, and over the ethical implications of their use in a military setting.

AI and autonomy

The defence industry refers to ‘AI’ as “a family of general-purpose technologies, which may enable machines to perform tasks normally requiring human or biological intelligence”. However, there’s no legal or statutory definition of AI. Instead, it’s often defined by reference to the combination of two key characteristics:

  • adaptivity – being ‘trained’ and operating via inferring patterns and connections in data that aren’t easily discernible to humans; and
  • autonomy – making decisions without the express intent or ongoing control of a human (both characteristics are illustrated in the short sketch after this list).
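
To make these two ideas concrete, here is a minimal, purely illustrative Python sketch (the ‘sensor readings’, labels and decision step below are invented for this example and aren’t drawn from any real system). The classification rule is inferred from example data rather than written out by a programmer (adaptivity), and the system then acts on its own output with no human sign-off (autonomy).

```python
# Illustrative sketch only – hypothetical data and decision logic.
from sklearn.neighbors import KNeighborsClassifier

# Adaptivity: past sensor readings and the label a human analyst gave each one.
# The rule for telling them apart is inferred from these examples, not hand-coded.
readings = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = ["benign", "benign", "threat", "threat"]
model = KNeighborsClassifier(n_neighbors=1).fit(readings, labels)

# Autonomy: the system acts on its own prediction with no human confirmation.
new_reading = [[0.85, 0.75]]
decision = model.predict(new_reading)[0]
if decision == "threat":
    print("Alert raised automatically, with no human sign-off on this step")
```

In a real weapon system, the stakes attached to that final, unsupervised step are precisely what the ‘human in the loop’ debate is about.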

Let’s expand on the latter for a moment.

The UK government has previously stated that it doesn’t possess fully autonomous weapons and has no plans to develop them. However, during a parliamentary debate in the House of Lords in 2021, it became apparent that the UK’s new posture doesn’t rule out the possibility that “the UK may consider it appropriate in certain contexts, to deploy a system that can use lethal force without a human in the loop”.

While the UK adheres to the principles of international humanitarian law, which prohibit the indiscriminate or unnecessary use of force, there’s currently no comprehensive legislation that specifically governs the use of AI and autonomous systems in the UK. There’s guidance, such as the UK government’s 2023 white paper ‘A pro-innovation approach to AI regulation’, and a handful of more generic regulatory regimes, such as the General Data Protection Regulation (GDPR), which addresses the data privacy and security concerns that arise from the use of AI systems. But essentially, issues are dealt with on a case-by-case and sector-by-sector basis. This lack of clarity presents several key challenges for legal professionals, including:

  • safety and security;
  • privacy;
  • bias and discrimination; and
  • responsibility and accountability.

Pinning down accountability

Who should be held accountable when/if something goes wrong with AI technology? The developer? The owner of the technology? The end user?

Unfortunately, the nature of most modern AI systems (layers of interconnected nodes designed to process and transform data in a hierarchical manner) prevents anyone from establishing the exact reasoning behind a system’s predictions or decisions. This makes it almost impossible to assign legal responsibility in the event of accidents or errors caused by the system.
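
To see why, consider a minimal sketch of the kind of layered arithmetic involved (the weights below are randomly generated stand-ins, not any real defence system): the output is fully determined by the learned numbers, yet nothing in those numbers reads as a reason that a lawyer, commander or court could interrogate.

```python
# Illustrative sketch only – a tiny two-layer network with made-up weights.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'learned' parameters: 10 input features -> 8 hidden nodes -> 1 output.
W1, b1 = rng.normal(size=(10, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def predict(x):
    """Each layer transforms the data and passes it on to the next."""
    hidden = np.maximum(0, x @ W1 + b1)   # non-linear transformation of the input
    score = (hidden @ W2 + b2).item()     # weighted combination of the hidden nodes
    return 1 / (1 + np.exp(-score))       # squashed into a 0-1 'confidence'

reading = rng.normal(size=10)             # placeholder input data
print(predict(reading))
# The arithmetic is fully traceable, but the weights in W1 and W2 offer no
# human-readable account of *why* the system reached this conclusion.
```

Real systems have millions or billions of such weights, which is why post-incident analysis so rarely yields the clear causal story that legal liability normally requires.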

There’s a risk that delegating tasks or decisions to AI systems could create a ‘responsibility gap’ between the systems that make decisions or recommendations and the human operators responsible for them. I’ve found that the military’s usual answer to who should be held accountable in these scenarios is that responsibility will always fall on the commanding officer. But attributing blame up or down the chain doesn’t resolve this legal and moral complexity, and in the absence of clear legislation it’s difficult to hold organisations responsible for the actions of their AI systems. Crimes may go unpunished, and we may even find that, eventually, the entire structure of the laws of war, along with their deterrent value, is significantly weakened if lawmakers can’t agree on some form of universal legislation.

Bias and discrimination

Although the media might have you believe otherwise, we’re nowhere near a world where AI thinks and makes decisions entirely of its own accord. The reality is that AI systems are only as good as the data they’re trained on and, while machine learning offers the ability to create incredibly powerful AI tools, it’s not immune to bad data or human tampering – whether that’s flawed or incomplete training data, technological limitations or simply misuse by bad actors. It’s all too easy to embed unconscious biases into decision-making, and without legislation addressing how these biases can be mitigated or avoided, there’s a risk that AI systems will perpetuate discrimination or unequal treatment.
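
As a toy illustration (the data below is entirely synthetic and the scenario hypothetical), the following sketch fits a standard classifier to deliberately skewed ‘historical’ decisions. Presented with two candidates who are identical on merit, the model scores them differently purely because of the group attribute it absorbed from its training data.

```python
# Illustrative sketch only – synthetic data, hypothetical scenario.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)    # 0 = group A, 1 = group B (hypothetical attribute)
merit = rng.normal(size=n)            # the factor that *should* drive the decision

# 'Historical' outcomes: merit matters, but group B was systematically marked down.
approved = (merit - 1.5 * group + rng.normal(scale=0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([merit, group]), approved)

# Two candidates with identical merit, differing only in group membership.
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # group B receives a markedly lower score
```

No one programmed the discrimination in; the model simply learned it from the data, which is exactly why scrutiny of training data and development practices matters.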

To try to alleviate these issues, industry experts have been considering the possibility of an ‘ethics by design’ approach to developing AI. Could legal responsibility be moved up the chain to the developer? Should there be rules of development as well as rules of engagement?

Of course, this approach would bring with it yet more new obstacles for the legal profession, although one would hope that, compared with the current complexities of establishing causal links where AI is involved, these obstacles would be much easier to navigate. With proper universal regulations and ethical principles in place for tech companies to follow when developing new systems, the path of causation should be significantly more straightforward, allowing lawyers to establish clear accountability.

Where do we go from here?

In 2021, the European Commission proposed the first-ever legal framework on AI, which addresses the risks the technology poses. The proposed regulation aims to establish harmonised rules for the development, deployment and use of AI systems in the European Union, and takes a risk-based approach that separates AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. Each category is subject to a different level of regulatory scrutiny and compliance requirements.

This innovative new framework led to the 2022 proposal for an ‘AI Liability Directive’, which aims to address the specific difficulties of legal proof and accountability linked to AI. Although at this stage the directive is no more than a concept, it offers a glimmer of hope to legal professionals and victims of AI-induced harm by introducing two primary safeguards:

  • Presumption of causality: if a victim can show that someone was at fault for not complying with a relevant obligation, and that there’s a likely causal link with the AI’s performance, the court can presume that this non-compliance caused the damage.
  • Access to relevant evidence: victims of AI-related damage can ask the court to order the disclosure of information about high-risk AI systems. This should help to identify the person or persons who may be held liable, and potentially shed light on what went wrong.

While one might argue that this new conceptual legislation wouldn’t solve all our legal issues, it’s certainly a step in the right direction.

In addition, there are policy papers such as the UK’s 2022 Defence Artificial Intelligence Strategy and the US Department of Defense’s Responsible Artificial Intelligence Strategy and Implementation Pathway 2022.

These provide important guidance to both tech developers and their military end users on adhering to international law and upholding ethical principles in the development and use of AI technology across defence. They also present opportunities for data scientists, engineers and manufacturers to adopt ethics-by-design approaches when creating new AI technology, aligning development with the related legal and regulatory frameworks to ensure that AI and autonomous systems are developed and deployed in defence in a manner that’s safe, effective and consistent with legal and ethical standards.

Yasmin Underwood is a defence consultant at Araby Consulting and a member of the National Association of Licensed Paralegals (NALP), a non-profit membership body and the only paralegal body recognised as an awarding organisation by Ofqual (the regulator of qualifications in England). Through its centres around the country, NALP offers accredited and recognised professional paralegal qualifications to those looking for a career as a paralegal professional.

Web: http://www.nationalparalegals.co.uk

X: @NALP_UK

Facebook: https://www.facebook.com/NationalAssocationsofLicensedParalegals/

LinkedIn: https://www.linkedin.com/company/national-association-of-licensed-paralegals/