
European Medicines Agency issues guidance on AI use in pharma regulatory science

Published on 7th Oct 2024

The drug agency's new guidance focuses on regulatory authorities but holds relevance for the industry


The European Medicines Agency (EMA) and the Heads of Medicines Agencies have unveiled guiding principles on leveraging large language models (LLMs) within regulatory science and pharmaceutical oversight. These new guidelines, while directed at regulatory authorities, bear implications for pharmaceutical firms and for enterprises that use or supply AI systems in pharmaceutical regulatory affairs.

Reliance on the EU AI Act 

The new guidelines rely on definitions and concepts from the EU Artificial Intelligence (AI) Act, such as "AI systems" and "general-purpose AI", despite making only one explicit reference to the legislation. This reliance underscores the importance of understanding the broader regulatory framework governing AI technologies in the EU.

Understanding LLMs

In the absence of a definition within the EU AI Act, the EMA has established its own description of LLMs. These models fall under the category of generative AI: they are trained on extensive text datasets and can generate natural language responses to a variety of inputs or prompts.
LLMs undergo two main training phases: first, pre-training, where they learn statistical correlations between text tokens; and, second, fine-tuning, where they are refined for specific tasks using labelled examples.
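To make the two phases more concrete, the toy sketch below is a minimal illustration, not a real LLM: a simple Python count table stands in for "learning statistical correlations between tokens", and a handful of labelled pairs stands in for fine-tuning. The corpus and examples are invented for illustration only.

```python
# Toy illustration of the two training phases the EMA describes.
# A real LLM learns these statistics with billions of parameters,
# not a count table; everything here is a simplified stand-in.
from collections import Counter, defaultdict

corpus = "the agency reviews the report and the agency approves the report"
tokens = corpus.split()

# "Pre-training": learn which token tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent next token observed in training."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # a token that often followed "the" in the corpus

# "Fine-tuning": refine the same model on task-specific labelled examples,
# here invented (prompt, completion) pairs that shift the statistics.
labelled_examples = [("agency", "assesses"), ("agency", "assesses")]
for prompt, completion in labelled_examples:
    following[prompt][completion] += 1

print(predict_next("agency"))  # now "assesses", reflecting the fine-tuning
```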

Common applications of LLMs include chatbots, AI programmes that engage in human-like conversations; information-processing automation, such as tools that can sift through vast amounts of medical literature to extract relevant information; and AI assistants, solutions that can help draft submissions, compile reports and manage document workflows.

LLMs in pharmaceutical regulation

LLMs have the potential to significantly support regulatory tasks and processes. They can query extensive documentation, automate knowledge and data mining, and assist in everyday administrative tasks. However, their use is not without challenges: variability in results, potential "hallucinations" (that is, inaccurate responses, as defined in the EMA guidance) and data security risks are among them.

In pharmaceutical regulatory science, practical examples of LLM use abound. LLMs typically add value by swiftly reviewing and summarising vast volumes of regulatory documents, including clinical trial reports and safety data, thereby expediting decision-making. They may be used to tackle complex regulatory queries, efficiently examining extensive databases to provide pertinent information. Routine report generation, such as pharmacovigilance updates, becomes more seamless as LLMs can adeptly extract and compile data from various sources. Accurate translation of regulatory documents is another application, supporting compliance with regulatory standards.
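As a hedged illustration of the summarisation use case, the sketch below calls a commercial LLM API, using the OpenAI Python client as one possible provider. The model name, system prompt and summarise function are illustrative assumptions, not drawn from the EMA guidance.

```python
# Minimal sketch of LLM-assisted document summarisation. The provider,
# model name and prompt wording are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarise(document_text: str) -> str:
    """Ask an LLM for a short summary of a non-confidential document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Summarise regulatory documents in five bullet points."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

# Any output should still be reviewed by a human: the guidance warns that
# LLMs can "hallucinate", that is, return inaccurate responses.
```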

Ethical AI and organisational principles

The EMA emphasises the need for safe and responsible use of LLMs. It outlines several recommendations for regulatory authorities, including ensuring the safe input of data, applying critical thinking, learning continuously, and consulting appropriate authorities when concerns arise. Deployers should understand how LLMs operate and carefully craft prompts to avoid exposing sensitive data. They should critically evaluate LLM outputs for accuracy, reliability, fairness, and legality. Continuous education on effective LLM use is essential to maximise benefits and minimise risks. Users should also know whom to consult when facing concerns and report any issues promptly.
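One way a deployer might put the "safe input of data" recommendation into practice is to redact sensitive identifiers before text reaches a prompt, as in the sketch below. The patterns and placeholders are invented assumptions for illustration; real-world redaction would need far more robust tooling.

```python
# Illustrative pre-prompt redaction; the patterns below are deliberately
# simplistic assumptions and would miss many real identifiers.
import re

REDACTIONS = {
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",         # e-mail addresses
    r"\bEudraCT\s*\d{4}-\d{6}-\d{2}\b": "[TRIAL-ID]",  # EudraCT trial numbers
}

def redact(text: str) -> str:
    """Replace sensitive patterns before the text is used in an LLM prompt."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(redact("Contact jane.doe@example.eu about EudraCT 2024-123456-01."))
# -> Contact [EMAIL] about [TRIAL-ID].
```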

The guidance also provides organisational principles for regulatory authorities to help users navigate LLM risks and maximise their value. These include defining governance frameworks, providing training, and sharing experiences across the network. Establishing governance on the use of LLMs gives users the awareness and guardrails needed to manage the risks of these models.

Governance may also consider whether mandatory training on safe and responsible use of LLMs is required and how to collect information on errors and issues arising from LLMs. Another crucial aspect of governance relates to the development of these models, not just their use. While some agencies might not use their own data to fine-tune or improve LLM performance on a specific task, those that do should consider the data protection and information security needs within their governance plans.

Industry impact

The EMA's guidance offers an opportunity for companies in the pharmaceutical, medtech and digital health sectors to take note of how regulatory bodies may integrate LLMs into their processes. 

Understanding these technologies can help businesses better prepare for future interactions with regulators and align with evolving standards. The guidance's emphasis on ethical considerations – such as fairness, transparency and accountability – mirrors broader industry trends. Companies are encouraged to ensure their AI systems adhere to these principles to foster trust with both regulators and the public. Additionally, the fast-evolving nature of AI technologies underscores the importance of continuous learning and adaptation. Investing in staff training on the safe and effective use of AI and staying updated on regulatory developments can provide a strategic advantage. 

These principles follow the main rationale of the AI Act provisions regarding high-risk AI systems, although there are differences in terms of scope and reach.

Osborne Clarke comment

The EMA's guidance on the use of LLMs in regulatory science underscores the transformative potential of AI technologies while highlighting the need for responsible use. Although the guidance targets regulatory authorities, its principles are highly relevant for companies in the pharma, medtech and digital health sectors.

Navigating the evolving landscape of AI in regulatory science requires a nuanced approach. Staying informed about regulatory developments and understanding their potential impact can provide a strategic advantage. In line with the AI Act's obligations on AI literacy (which become applicable in February 2025, along with other provisions), training staff to use AI technologies safely and effectively can enhance operational efficiency and compliance. Embracing ethical AI principles, such as fairness, transparency and accountability, can further foster trust with regulators and the public. Adopting robust data protection measures ensures that sensitive information is safeguarded, aligning with the broader regulatory framework.

By integrating these considerations into their operations, companies can not only harness the benefits of AI technologies but also mitigate risks and build a foundation of trust with regulators and stakeholders across the healthcare and life sciences industry.

This Insight series on the EU AI Act explores its wide-ranging significance for businesses active across bio-pharmaceuticals, medical devices and in vitro diagnostics. Coverage will include AI supply chains, product logistics, research and development, SMEs, compliance monitoring, liability, and more. The next in the series will focus on the use of AI in the medicinal product lifecycle.


 


