Prohibited AI practices: Europe interprets the limits of innovation
Published on 25th Feb 2025
The European Commission has taken the lead and issued guidelines that clarify prohibited AI practices

The momentum that artificial intelligence (AI) has gained in recent years across all aspects of society has compelled the European Union to take a step forward in regulating its use and ensuring the protection of fundamental rights. Regulation (EU) 2024/1689 on AI establishes a harmonised framework for the placing on the market, putting into service and use of AI systems within the Union. The AI Act entered into force on 1 August 2024 and classifies AI systems into four risk categories: unacceptable, high, limited and minimal or no risk.
In this context, on 4 February 2025, just two days after the provisions on prohibited AI practices became applicable, the European Commission published its guidelines on prohibited AI practices. This document aims to clarify which uses and modalities of AI entail an unacceptable risk and are therefore strictly prohibited in the EU, thereby reinforcing the protection of fundamental rights, safety and the well-being of citizens.
A necessary interpretative framework
Although not legally binding, the European Commission's guidelines play a significant role in the consistent application of the AI Act across all Member States, detailing the essential elements that characterise each prohibited practice and offering practical examples and clarifications for their correct interpretation. Their purpose is to offer guidance to supervisory authorities as well as to developers and companies operating with AI systems in the European Union, clearly outlining which practices are not acceptable.
Prohibited AI practices and their interpretation in the guidelines
Some AI applications are categorically prohibited due to their potential to cause harm to individuals and society at large. The guidelines focus on AI practices considered to pose an unacceptable risk under the AI Act.
Subliminal, manipulative or deceptive techniques
The use of techniques that undermine individual autonomy by operating subliminally, manipulatively or deceptively, thereby compromising the ability to make informed decisions, is prohibited.
The defining feature of these techniques is that they materially distort people's behaviour, posing a significant risk of physical, psychological, financial or economic harm. Intent to deceive is not required; it is sufficient that the system employs manipulative methods. An example would be the subvisual or subaudible display of messages, images or sounds that influence behaviour without conscious perception.
However, lawful persuasion techniques that operate transparently and respect individual autonomy are excluded from the prohibition. This includes systems that offer personalised advertising recommendations based on user consent, as well as those that analyse customer emotions to improve service, provided they operate transparently.
Exploitation of vulnerabilities
Systems that exploit factors related to age, disability or the socioeconomic situation of individuals or groups, and that result in a significant distortion of their behaviour causing consequent harm, are prohibited.
The vulnerability must be directly related to these specific factors. A clear example of this would be systems that use misleading advertising to promote financial products to economically vulnerable groups, taking advantage of their precarious situation to induce harmful decisions.
Social scoring
AI systems that evaluate and score individuals based on their social behaviour and personal characteristics over a period of time are prohibited if such scoring results in detrimental or unfavourable treatment that is unjustified or disproportionate.
This prohibition applies regardless of whether the scoring is generated by the same organisation that uses it. An example would be a system that scores people based on their social media activity and uses that score to determine their eligibility for employment, housing or insurance.
Prediction of criminal offences
Systems that predict the likelihood of a person committing a crime based solely on profiles and personality traits, without verifiable objective data, are prohibited. This prohibition extends to private entities acting on behalf of law enforcement authorities, as well as to those subject to legal obligations to prevent crime (for example, anti-money laundering obligations). An example of this type of prohibition would be algorithms that predict criminality based on ethnicity, nationality or family background.
Untargeted scraping of facial images
The untargeted scraping of facial images using AI, whether from the Internet or CCTV footage, to create or expand facial recognition databases is prohibited. This includes, for example, companies collecting images from social media to train facial recognition systems.
Emotion recognition in workplaces or educational institutions
AI systems aimed at detecting or inferring emotions and intentions in workplace and educational environments from biometric data, such as facial expressions, postures or micro-expressions, are prohibited. This includes systems that analyse "enthusiasm" in job interviews or monitor facial expressions to assess student engagement. There are, however, exceptions for medical or safety reasons, such as measuring driver fatigue or stress when operating dangerous machinery.
Biometric categorisation
Biometric categorisation systems that infer sensitive characteristics such as ethnic origin, political beliefs, sexual orientation or religion are prohibited, as such categorisation can result in discriminatory treatment. A clear example of this prohibition would be AI systems that classify people based on their skin tone and associate these profiles with crime statistics.
Real-time remote biometric identification systems
Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited, except in specific cases such as searching for victims or preventing imminent threats. This prohibition does not apply to access control systems in restricted areas.
Osborne Clarke comment
Although the prohibitions on these practices came into effect on 2 February 2025, the sanctioning regime will not be applicable until 2 August 2025. The penalties provided by the AI Act are considerable, reaching up to 7% of a company's global annual turnover or €35 million, whichever is higher. Moreover, liability does not rest solely with providers; organisations deploying prohibited AI systems can also be sanctioned.
In this context, compliance with the AI Act will be a strategic element for any organisation that develops, supplies or uses AI systems. Adherence to this regulation not only mitigates significant legal and financial risks but can also determine the future of the company in an increasingly regulated market.