Robotics at a global regulatory crossroads: compliance challenges for autonomous systems
Published on 3rd April 2025
Developers and manufacturers that embrace regulatory change and compliance can enhance efficiency and competitiveness

The artificial intelligence (AI) revolution is facing a regulatory tidal wave. AI systems are increasingly integrated into physical hardware, while autonomous robots create opportunities to support much-needed increases in productivity at scale and across industrial sectors. With huge investment in these technologies driving growth, the regulatory landscape is shifting rapidly and increasingly complex legislation is coming into effect.
As the EU overhauls legislation and the UK moves to align with digital regulation frameworks, the challenge for businesses driving innovation in this area is to leverage the potential for robotics while meeting new and extensive compliance obligations across overlapping regulatory regimes.
Manufacturers must contend with an emerging triad of reforms to machinery regulations, expanded product liability rules and an overarching AI governance framework. Compliance with the legal changes while developing AI-embodied robots and devices is a significant challenge for manufacturers. But successful compliance can offer significant strategic advantages.
Understanding the regulatory revamp
The EU Machinery Directive (2006/42/EC) established baseline safety requirements for industrial products, prioritising physical safeguards such as emergency stop mechanisms and safe maintenance procedures.
Although the framework has been effective for traditional machinery, it was not drafted to account for systems that learn from their environments. Surgical robots adapting to tissue density variations or warehouse robots optimising routes in real time are well beyond the directive's scope.
New machinery regulation
From January 2027, Regulation 2023/1230 will introduce three pivotal requirements for robotics manufacturers: autonomy thresholds, lifetime cybersecurity responsibilities and collaborative risk mapping.
The introduction of autonomy thresholds will mean that machines demonstrating "self-evolving behaviour through experience" face enhanced conformity assessment requirements. Documented safety proofs will be required for future operational states and not just for current capabilities.
Lifetime cybersecurity measures in the regulation will require network-connected robots to demonstrate resilience against both physical tampering and digital intrusions throughout their lifecycle, including post-sale software updates.
Collaborative risk mapping will be needed for robots sharing workspaces with humans. These assessments will have to account for human-machine interactions, requiring dynamic risk mapping, real-time hazard monitoring and responsive behaviour.
Product liability expansion
The EU's new Product Liability Directive, which came into force in December 2024, will increase potential civil liability for autonomous robotics.
The revised directive treats "software as a product" in its own right: machine-learning models and other AI systems can face standalone liability claims for defectiveness, without the need for a fault in physical hardware.
It also revises rules around the presumption of a defect. AI systems thought to have caused harm will in some circumstances be presumed to be defective unless manufacturers can prove that the system behaviour was safe or unrelated to the harm.
And there are implications for supply chain accountability. Members of a supply chain for a robot or an AI system can take on shared responsibility for defects, meaning that liability might not sit exclusively with an original equipment manufacturer but also with data annotators or algorithm trainers.
AI Act: risk-based governance
The EU's tiered AI regulatory framework creates divergent compliance obligations for different use cases.
High-risk applications, like surgical robots or drones, will require conformity assessments, human oversight protocols and granular data governance.
Limited risk applications, like inventory management systems, may only need to provide transparency to users on their decision-making processes.
The highest-risk applications, such as workplace emotion recognition systems that might otherwise have been used in customer service or HR-focused robots, are already prohibited in the EU.
Other legislative changes
Alongside these reforms, there have been other legislative changes. The new General Product Safety Regulation has been effective in the EU since December 2024. It expands safety obligations for consumer-facing products, including AI systems and robots. The regulation imposes ongoing and overlapping safety and cybersecurity responsibilities, as well as traceability obligations across supply chains.
The UK Product Security and Telecommunications Infrastructure Act and the EU Cyber Resilience Act also impose independent cybersecurity obligations on connected devices of all kinds.
For the EU, the Ecodesign for Sustainable Products Regulation creates a framework for the introduction of design obligations on machinery, mandating repairability and environmental considerations.
Compliance pathways
Proactive compliance is essential when bringing autonomous hardware to market. There are a range of compliance pathways and strategies that businesses can adopt to innovate safely, reduce risk and keep on top of their obligations.
Businesses can build "living" safety documentation and ensure appropriate oversight. Traditional static documentation fails in a world of adaptive systems. Businesses may consider establishing cross-functional AI ethics and oversight boards to develop policies and processes that evolve alongside system capabilities.
Audit trails documenting choices and risk mitigations enable responsive updates as industry standards and requirements mature. Businesses will also need to embed governance checkpoints into R&D "sprints", rather than consigning these to post-development processes.
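By way of illustration only, a tamper-evident audit trail of design decisions might be sketched as a hash-chained log, where each entry records the decision, its rationale and the accompanying risk mitigations. All function and field names here are hypothetical, not drawn from any regulation or standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def add_entry(log, decision, rationale, mitigations):
    """Append a tamper-evident entry: each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "risk_mitigations": mitigations,
        "prev_hash": prev_hash,
    }
    # Deterministic serialisation so the hash is reproducible on verification.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the hash chain; returns False if any entry was altered."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev_hash"] != prev or recomputed != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

The hash chaining means that any after-the-fact edit to an earlier entry breaks verification, which is the property that makes such a log useful as evidence of when and why a design choice was made.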
For supply chain management, suppliers and vendors should be required to maintain "explainability repositories" for black-box AI components, and provide real-time update notifications. Contractual liability caps and contracts as a whole need to reflect software iteration cycles – a robot OS that receives updates daily or monthly will need more frequent contractual reviews than hardware that might only be refreshed every few years.
Sandbox testing and lifecycle forecasting can offer controlled testing environments and simulations that allow for safe exploration of "edge" cases while generating evidence of product conformity. This will help rebut presumptions of defectiveness and streamline conformity assessments. Crucially, planning product lifecycles also allows for early identification of potential regulatory exposure.
Modular compliance by design addresses potential regulatory deviations across jurisdictions. Businesses should consider using modular systems and components for regional adaptation and standardised documentation outputs for easy regulatory reporting or disclosure.
In the future, embedded monitoring tools will be critical for managing risk and potential liability, particularly when rebutting presumptions of defectiveness under the Product Liability Directive.
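As a minimal sketch of what such embedded monitoring might look like, assuming a robot exposes periodic telemetry readings, a monitor could log every reading against a defined safety envelope and retain the results as evidence. The class, field names and limits below are hypothetical.

```python
from datetime import datetime, timezone

class SafetyMonitor:
    """Records telemetry against a safety envelope; the event log is
    retained as evidence of in-envelope operation."""

    def __init__(self, max_speed_mps, max_force_n):
        self.limits = {"speed_mps": max_speed_mps, "force_n": max_force_n}
        self.events = []

    def observe(self, reading):
        """Log a telemetry reading; return True if within the envelope."""
        breaches = [
            key for key, limit in self.limits.items()
            if reading.get(key, 0) > limit
        ]
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "reading": reading,
            "breaches": breaches,
        })
        return not breaches
```

A log of this kind, kept across the product's lifecycle, is the sort of record that could help a manufacturer show that a system behaved safely at the relevant time.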
Osborne Clarke comment
Compliance can deliver strategic advantages. Robotics developers and other manufacturers can build industry trust and enhance operational efficiency by integrating compliance into the development cycle for new products. By embracing regulatory change as an opportunity rather than an obstacle, they can turn compliance from a cost centre into a differentiator and competitive advantage.
Proactive compliance not only mitigates risk but also positions businesses to be able to influence emerging standards for technology and drive innovation.