What are the legal considerations for defence to civilian AI use cases in the UK and European mobility and infrastructure sectors?

Published on 10th Jul 2024

The early use of AI in the defence industry is likely to read across into civilian mobility and infrastructure use cases

Artificial intelligence (AI) is disrupting many areas of daily life and business. The mobility and infrastructure sectors are no exception. Substantial benefits are open to these sectors through the targeted deployment of AI. These include efficiency gains in construction and operation, enhancement of safety-critical functions, and predictive maintenance that reduces cost, narrows gaps in availability and further improves safety. But, alongside the benefits, the implementation of AI brings novel legal considerations and potential risks that will require careful navigation.

From defence to civilian use 

The use of AI in defence has led to significant advancements in areas such as surveillance, cybersecurity, and autonomous systems. For example, the US Department of Defense released its Responsible AI Toolkit in late 2023 and the UK's Defence Artificial Intelligence Centre published its Defence AI Playbook in January 2024. The UK playbook sets out various use cases, some of which we mention in the examples below.

These advances and use cases are finding their way into civilian applications in the mobility and infrastructure sectors:

Autonomous vehicles 

In the defence sector, AI-driven autonomous vehicles are utilised for reconnaissance, combat and supply missions, with the intention of minimising human exposure to danger. This technology can be adapted to civilian use, particularly in the development of self-driving cars for urban transportation and logistics.

Smart infrastructure and predictive maintenance

AI-powered surveillance systems deployed in military installations for threat detection and monitoring can be repurposed for civilian infrastructure. For instance, integrating AI into urban infrastructure such as bridges and tunnels can enhance structural health monitoring and facilitate predictive maintenance. 

This presents a potential step change for critical systems, such as traffic management and signalling systems or utility networks, helping to minimise downtime and disruption.

Cybersecurity 

Defence applications heavily rely on AI for cybersecurity to detect and mitigate cyber threats in real time. AI-driven cybersecurity solutions can safeguard civilian infrastructure such as power grids and transportation networks from cyber attacks. 

Cloud and edge computing 

Powerful cloud and edge computing systems, typically hosting AI software, are increasingly being used by government bodies and defence contractors, with a growing focus on deployment in military environments, potentially including active theatres of operation. These systems provide secure, reliable connectivity and data flow in such environments. The technology is equally capable of civilian use, for example in intercontinental cargo shipping or to improve infrastructure viability in hostile environments such as desert or polar geographies.

Legal considerations in civilian AI applications 

These applications in civilian mobility and infrastructure present both risks and opportunities, particularly under UK and EU law. Key examples arise in relation to data privacy and protection, liability and responsibility, intellectual property and regulatory compliance.

Data privacy and protection 

The use of AI raises concerns about data privacy and protection. Compliance with the General Data Protection Regulation (GDPR) in the EU and, in the UK, with the UK GDPR and the Data Protection Act 2018 is needed to mitigate legal risks and ensure user privacy.

AI systems rely on vast data sets: for example, AI-enabled facial recognition in CCTV, which is already attracting regulatory scrutiny and public concern around privacy and security.

Liability and responsibility 

Determining liability in accidents involving AI-powered vehicles can raise complex legal questions. Clarifying chains of responsibility between manufacturers, transport operators, and AI systems providers requires comprehensive legal frameworks to address potential harm and allocate accountability. 

The UK's legislative programme, announced in November 2023, included an Automated Vehicles Bill, which received Royal Assent as the Automated Vehicles Act 2024 in May 2024. The Act implements recommendations to create a new regulatory framework for automated vehicles (AVs) in the UK, placing accountability on the authorised organisations that develop and operate AVs and removing liability for driving the vehicle from users.

While the EU's AI Act does not directly apply to AVs, several of its accountability requirements are expected to be extended to AVs in future through delegated acts of the Commission.

Intellectual property rights 

As with all new technology, protecting IP rights relating to AI applications is crucial for fostering innovation given the high levels of investment needed to bring AI products to market. 

Regulatory compliance 

AI technologies in mobility and infrastructure must comply with existing regulations (which are sometimes technology-neutral) governing, for example, safety standards, environmental impact assessments, or public procurement procedures. 

Ensuring compliance with legislation will promote transparency and accountability in the deployment of AI. However, the adoption of AI in civilian uses also presents opportunities for legal innovation and collaboration. The underlying frameworks can be developed to establish clear guidelines for deploying AI in civilian contexts, fostering technological advancement while safeguarding the public interest.

Proactive engagement with regulatory authorities and stakeholders can facilitate the development of ethical AI standards, promoting responsible innovation in an era of digital transformation. 

The EU approach to AI regulation 

The EU AI Act enters into force in August 2024, with most of its obligations applying after a two-year transition period during which companies can update their policies and practices to ensure compliance. The Act will allow the free use of lower-risk AI systems, but AI systems that fall into specific high-risk categories will need to be registered in an EU database and assessed for safety and compliance. The most stringent treatment is reserved for AI systems posing an unacceptable risk (for example, those that exploit vulnerable groups or deploy potentially harmful manipulative techniques), which are prohibited outright; AI developed or used exclusively for military purposes falls outside the Act's scope altogether.

Relevant examples of high-risk categories under the EU AI Act include:

  • Critical infrastructure – AI systems intended for safety components in the management and operation of critical infrastructure. Road traffic systems and the supply of utilities are considered high risk.
  • Safety components of products – where AI is used in safety-critical elements. The use of such systems in rail infrastructure, lifts/elevators, or appliances burning fuels is classified as high risk.

The high-risk category in the EU AI Act covers use cases where there is the potential for failures or malfunctions that put the lives or health of large numbers of people at risk. 

The UK's approach to AI regulation 

The UK has to date taken a markedly different approach to AI regulation. The government's AI white paper of March 2023 proposed five high-level principles to be issued informally by the UK government to guide the approach of existing sectoral and other regulators.

The five principles cover: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

The government has said it does not plan new legislation or powers, although it has indicated this may change where regulatory gaps are revealed. In January 2024 it published its intention to set a series of tests that would need to be met before passing new laws on AI.

Among the tests that would trigger intervention would be AI companies failing to uphold voluntary commitments to avoid harm. Beyond potential legislative change, it is worth noting that a number of UK regulators are actively engaged in understanding how AI fits into their areas of expertise. For example, the Competition and Markets Authority has recently reviewed the impact of AI foundation models on consumers and competition, and is developing a series of principles to guide the ongoing development and use of such models, helping people, businesses and the economy to benefit fully from the innovation and growth they offer.

However, a general election has since been called. Our Insight looks at how AI policy may be expected to develop under the next government.

Osborne Clarke comment 

The transfer of defence-based AI to mobility and infrastructure, together with those sectors' own development of exciting new use cases, presents both opportunities and challenges.

The UK and the EU have the chance to leverage AI technologies to build smarter, safer and more sustainable systems for the future. While AI can contribute to innovation, greater efficiency, reduced costs and increased safety, it will also require robust regulatory frameworks to protect consumers and competition and to ensure compliance with core legal obligations and safety standards.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
