AI literacy becomes mandatory for life sciences across the EU
Published on 5th Feb 2025
The pharma and medtech sectors have a strategic window to implement AI literacy ahead of an August enforcement deadline
Since becoming law on 1 August 2024, the EU Artificial Intelligence (AI) Act has been applying in phases, with a series of regulatory deadlines that are setting the stage for transformation in life sciences.
One of the first main provisions under the AI Act became applicable on 2 February 2025. It requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.
Pharmaceutical and biotech firms deploying high-risk AI technologies across their medicines' lifecycle or in their interactions with the EMA are among those affected by this requirement. Medtech companies will need to navigate the dual regulatory framework of AI and medical devices law, and diagnostics businesses will be similarly affected. Key repercussions are also expected for those delivering or overseeing healthcare in Europe.
Phased implementation
Although the AI literacy obligations under the EU AI Act are now formally applicable, the governance provisions of the regulation will come into effect from 2 August 2025.
These establish Union-level governance principles and detail the process for designating national competent authorities, as well as how they will exercise their powers. This phased approach may ease the burden on companies that are not yet fully prepared to comply, as recently confirmed by the European AI Office, established by Commission Decision C/2024/1459 of 24 January 2024. Similarly, the relevant penalties will also take effect from 2 August 2025.
The interim period offers a strategic window for life sciences and healthcare enterprises to develop and deploy comprehensive AI literacy programmes. These initiatives should aim to cultivate a general awareness among staff about the AI Act's provisions and enhance literacy measures tailored to each employee's use, their organisational role and the risk level associated with the AI systems.
It is crucial to recognise, however, that enforcement provisions – covering market surveillance, control of AI systems in the EU and the introduction of national procedures for handling AI systems that pose a risk – along with provisions for legal remedies will only come into effect on 2 August 2026.
Lab to clinic literacy
The EU AI Act defines AI literacy as the skills, knowledge and understanding necessary for providers, deployers and affected persons to make informed decisions regarding AI systems' deployment. It should allow them to gain awareness about the opportunities of AI but also, importantly, about its risks and the possible harms it can cause.
This includes understanding the correct application of technical elements during the AI system's development phase, measures to be applied during its use, and suitable ways to interpret the AI system's output. In the context of the EU AI Act, this literacy aims to equip all relevant participants in the AI value chain with the insights required to ensure appropriate compliance and correct enforcement.
In the realm of life sciences, these literacy obligations are pivotal in ensuring that personnel, such as in-house medtech or pharma staff, possess the requisite skills to perform their AI-related duties both effectively and safely. This is particularly crucial for those engaged in the development, operation and utilisation of AI systems.
The EU AI Act extends these obligations to encompass other individuals, referred to as "affected persons". Although the regulation does not explicitly define who these individuals are, it is evident that the literacy requirements imposed on life sciences deployers and providers must equip affected persons with the essential knowledge to comprehend the impact that AI-assisted decisions have on them. This group could encompass patients and end users of AI-driven technologies within the life sciences sector.
The integration of AI systems in healthcare is transformative, offering the potential to enhance patient outcomes and streamline care providers' operations. This is especially critical for high-risk AI systems, including those utilised in medical diagnostics, treatment planning and patient monitoring. However, the AI literacy requirements outlined in the EU AI Act do not differentiate between high-risk AI systems and those deemed low- or medium-risk. These provisions apply to any deployer or provider of an AI system governed by the regulation, making their breadth extensive and far-reaching.
Comprehensive literacy strategies
Achieving AI literacy demands a comprehensive strategy – and the life sciences sector is no exception.
Education stands as a pivotal factor in ensuring that staff and other individuals involved in the operation and use of AI systems attain a sufficient level of AI literacy. This encompasses not only the personnel directly interacting with AI systems but also external contractors and service providers managing AI systems on behalf of a provider or deployer.
Training and guidance are anticipated to be fundamental to compliance for these operators. Various aspects of AI may come into play, such as understanding the mechanics of AI systems and recognising their associated opportunities and risks, knowing the boundaries of permissible use, handling confidential information and personal data responsibly, and respecting third-party rights.
In practice, life sciences businesses must determine which facets of AI literacy are most pertinent for their staff and relevant third parties, and how to ensure they effectively acquire this knowledge. This decision will hinge on the company's field of activity, the specific job requirements and the variety of AI systems utilised or marketed.
For high-risk AI systems, the AI Act additionally requires deployers to ensure that individuals tasked with implementing instructions for use and human oversight possess the necessary competence, particularly an adequate level of AI literacy, training and authority to execute these tasks properly. This obligation extends not only to life sciences businesses across pharmaceuticals, biotech and medical devices but also to healthcare professionals and organisations using such AI in a professional capacity; for instance, to treat or diagnose patients or in biometric categorisation configurations deemed high risk by the AI Act.
Documentation will serve as an effective tool to record all measures taken to foster and champion AI literacy, providing evidence that the organisation has met its obligations under the AI Act. Voluntary codes of conduct, facilitated by the AI Office and Member States, are expected to offer further guidance on how companies can promote AI literacy, particularly for those involved in the development, operation, and use of AI.
Osborne Clarke comment
The EU AI Act's AI literacy obligations, effective from 2 February 2025, mark the first mandatory deadline for life sciences businesses to engage with the specifics of the EU AI Act and begin aligning with its extensive requirements.
The AI literacy provisions mandate that organisations ensure both internal staff and some external stakeholders possess the necessary skills, knowledge and understanding to make informed decisions about AI systems. The lead-up to 2 August 2025, when the governance provisions and penalties take effect, offers a strategic chance for pharma, diagnostic, medtech and biotech enterprises to develop and implement comprehensive AI literacy programmes.
It is recommended to commence preparations now by offering training and guidance, documenting compliance measures, and considering voluntary codes of conduct to enhance AI literacy. By taking these steps, organisations in life sciences can mitigate risks, foster innovation and boost efficiency, ensuring they are well-prepared for one of the first requirements to come into force under the EU AI Act.