European Commission proposes new regulatory framework for artificial intelligence
Published on 23rd Apr 2021
Draft legislation will create a pyramid of risk-based regulation for artificial intelligence in the EU, including heavy fines for non-compliance
After extensive consultation, the European Commission has unveiled its proposed regulatory regime for artificial intelligence (AI). The legislation envisages a full regulatory framework, including new EU and national bodies with strong enforcement powers, and heavy fines on businesses for non-compliance. It is shaped around the level of risk created by different applications of AI, and so its impact will vary between developers and providers of AI.
Launching the proposals, the EU Commissioners explained their fear that, without regulation, EU businesses might not feel confident to adopt AI. The new legislation is therefore intended to drive an ecosystem of trustworthy AI for EU citizens and organisations and to ensure that the wide benefits of AI are realised. This pursuit of trustworthiness, however, creates a new layer of regulatory risk and an additional financial and organisational burden of compliance.
The new legislation essentially applies to any AI systems (or AI outputs) used in the EU. Military applications are excluded. The draft provisions are likely to be subject to extensive lobbying as they make their way through the legislative process, so we would not expect this to become law before 2023, probably with a further 18 to 24 months before it is fully in force (see "What happens next?" below).
Four levels of AI risk are identified, with the regulatory approach adapted accordingly.
1. Prohibited AI systems
As was widely anticipated, the Commission has mostly outlawed the use of real-time automated facial-recognition systems in publicly accessible places by public authorities for the purposes of law enforcement. Exceptions to the ban include a targeted search for a specific potential victim of crime, preventing a specific, substantial and imminent threat to life or a terrorist attack, and certain situations where there is a search for a criminal suspect or perpetrator. These systems can only be used subject to safeguards, limitations (including time and geography of use), the requirements of proportionality and necessity, and (usually) with prior judicial authorisation.
AI systems are also banned where they use "subliminal techniques beyond a person's consciousness", or where they exploit people's vulnerabilities due to age or physical or mental disability, in either case with a view to distorting their behaviour in a way that could cause physical or psychological harm.
Finally, "social scoring" systems used by public authorities are prohibited where they evaluate people's trustworthiness based on their social behaviour or their personal or personality characteristics, with a detrimental effect on how that person is treated in a context unrelated to the context in which the data was gathered, or in a way which is unjustified or disproportionate to the social behaviour or its gravity.
The facial recognition prohibition is the clearest and most developed of these prohibitions. Greater clarity and explanation will be needed around the other bans, not least because breach will be subject to fines of up to the higher of €30 million or 6% of worldwide turnover – higher than the maximum fines under the General Data Protection Regulation (GDPR).
2. 'High risk' AI systems
The next level of regulation is "high risk" AI, the main focus of the Commission's proposals. AI applications are considered to be high risk where there is potential for harm to health and safety or for an adverse impact on fundamental rights. They comprise:
- AI safety systems or other AI used in a wide range of products where harmonised EU safety requirements apply or where a requirement for a third party conformity assessment is already in place, including rail, road and air transportation and other vehicles or machinery; and
- various specified AI applications, including: biometric identification systems; critical infrastructure systems; systems used in appraisals for education or vocational training; systems used for recruitment, performance appraisal or task allocation in the workplace; systems used for assessing eligibility for public benefits and services; creditworthiness checks; despatch or prioritisation of emergency services; and various systems used in law enforcement, immigration and asylum processes or the administration of justice and democracy. The Commission will be empowered to amend this list.
High-risk AI systems must comply with requirements in six areas, covering data and various aspects of transparency, safety and control.
Data and data governance
The draft regulation sets out extensive obligations in relation to the datasets used to train, validate and test high-risk machine learning systems, and to the governance around them. Datasets must be subject to strong governance, including review and scrutiny of design, collection, curation, suitability, possible bias, gaps and shortcomings. They must be "relevant, representative, free of errors and complete", and must take into account the characteristics or elements that are particular to the geographical, behavioural or functional context in which they will be applied.
These are significant provisions. Not only are they rigorous and broad, but this is the second area where the highest level of fines, of up to 6% of worldwide turnover, can be imposed – higher than the maximum GDPR fines for non-compliant personal data practices. The provisions apply to any category of data used for training AI models, not just personal data.
There is a clear recognition in these provisions of the importance of data in machine learning and deep learning. The focus is on making sure that proper care and attention is taken in relation to selecting datasets in order to address the risk of "garbage in, garbage out" – or more generally of poor, biased or non-representative data generating poor, biased or non-representative outputs in applications with a high risk of causing physical harm or damage to fundamental rights.
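To make the data governance obligations more concrete, the sketch below shows the kind of automated checks a provider might document as part of its dataset scrutiny: completeness, duplicates and the distribution of a protected or contextual attribute. It is illustrative only; the draft regulation does not prescribe any particular tooling or metrics, and the column names are hypothetical.

```python
# Illustrative only: the draft regulation describes outcomes, not tooling.
# A provider might record checks along these lines as part of its data
# governance documentation for a high-risk system. Column names are hypothetical.
import pandas as pd

def dataset_governance_report(df: pd.DataFrame, protected_attribute: str) -> dict:
    """Summarise completeness and representativeness of a training dataset."""
    return {
        # "free of errors and complete": proportion of missing values per column
        "missing_values": df.isna().mean().to_dict(),
        # duplicates can indicate collection or curation problems
        "duplicate_rows": int(df.duplicated().sum()),
        # "representative": distribution across a protected or contextual attribute
        "group_distribution": df[protected_attribute].value_counts(normalize=True).to_dict(),
    }

def flag_underrepresented_groups(report: dict, threshold: float = 0.05) -> list:
    """Flag groups below a chosen share of the data as candidates for review
    of possible gaps or bias (the 5% threshold here is purely illustrative)."""
    return [g for g, share in report["group_distribution"].items() if share < threshold]
```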
Transparency, safety and control
In addition to the data provisions, high-risk AI must also meet mandatory requirements in relation to technical documentation, record-keeping, wider transparency issues and human oversight, as well as accuracy, robustness and cybersecurity.
The challenge around these provisions is that transparency about how a system operates can be difficult to achieve with some forms of AI. The "black box" problem of not knowing why a particular output has been generated is inherent in the way in which machine learning systems function. They are set up to self-adjust and self-calibrate on an ongoing basis with each piece of data that is passed through them. They use a maths-based process, fundamentally different to human logic and reasoning and therefore not simple to explain.
In addition to transparency, the regulation requires high-risk AI systems to be subject to human oversight. A human should be able to understand the system and to monitor for anomalies, dysfunction or unexpected performance, as well as being able to disregard, reverse or override the outputs and to operate a "stop" button where needed. Again, implementing these ideas into machine learning systems can be difficult.
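By way of illustration, the sketch below shows one way the oversight measures described above might be wired into a deployed system: an operator can disregard or override an output, and a "stop" mechanism halts further use. The class and method names are hypothetical; the regulation describes outcomes, not implementations.

```python
# Illustrative only: a hypothetical wrapper giving a human operator the
# oversight controls described above (override and a "stop" function).
class HumanOversightWrapper:
    def __init__(self, model):
        self.model = model
        self.stopped = False  # the "stop button" state

    def stop(self):
        """Halt the system: no further outputs are produced until restarted."""
        self.stopped = True

    def predict(self, inputs, human_override=None):
        """Return the model output unless the operator has stopped the system
        or supplied an overriding decision."""
        if self.stopped:
            raise RuntimeError("System halted by human operator")
        if human_override is not None:
            # the operator may disregard or reverse the model's output
            return human_override
        return self.model(inputs)
```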
Conformity and post-market measures
The regulation sets up a system of conformity assessments and certification for high-risk AI systems, which appears to be inspired by product regulation regimes. The majority of providers will, however, be able to undertake a self-assessment, unless the product already requires third-party conformity assessment or the AI system is a remote biometric identification system (where its use is not prohibited). Where an AI system is already within scope of a conformity assessment regime, review of compliance with the AI regulation will be rolled into the wider assessment process.
AI systems in conformity with the regulation will have the CE mark applied, to demonstrate their compliance. Each such AI system must be registered on an EU database held by the Commission and open to the public.
The regulation also sets up post-market monitoring and fault-reporting obligations for providers of certified high-risk AI systems, as well as market surveillance responsibilities for national authorities.
3. Transparency for lower-risk systems
The third level of regulation concerns lower-risk systems where the primary concern is that people know that AI is being used. The obligation is essentially one of transparency – the human must know, unless it is obvious from the context, that they are engaging with the output of an AI system.
This obligation applies to:
- systems that interact with natural persons (such as chatbots);
- the use of emotion recognition systems or biometric categorisation systems;
- images, video or audio that have been created or manipulated by AI, including deep fakes.
There are exceptions for certain law enforcement situations and, for the third category, for artistic freedoms.
4. Codes of conduct
For AI systems that do not fall within any of the above categories of formal regulation, the proposals envisage codes of conduct to encourage providers of lower-risk AI systems to adhere voluntarily to the regime for high-risk AI systems. Codes of conduct can also address wider issues such as sustainability, accessibility, stakeholder participation in the development of AI systems and diversity in design teams.
Enforcement and fines
A new European Artificial Intelligence Board will be set up at EU level to provide expert advice to the Commission. Enforcement will be at national level, with Member States designating or creating national competent bodies. The Commission will co-ordinate EU-wide issues, including extending a national ban of a harmful AI system across the EU, where appropriate.
As noted, breach of the prohibitions or of the data provisions for high-risk AI will be subject to heavy fines of up to the higher of 6% of worldwide turnover or €30 million. Breach of any other substantive provision of the regulation will be subject to fines of up to the higher of 4% of worldwide turnover or €20 million. Providing incorrect, incomplete or misleading information to conformity or enforcement bodies will be subject to fines of up to the higher of 2% of worldwide turnover or €10 million. Fines will be imposed at national level.
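To illustrate how the caps work in practice, the short calculation below applies the "higher of" formula for each tier. It is illustrative arithmetic only; actual fines would be set by national authorities up to these ceilings.

```python
# Illustrative arithmetic only: maximum fine ceilings under the draft regulation.
def max_fine_cap(worldwide_turnover_eur: float, breach: str) -> float:
    """Return the fine ceiling: the higher of a percentage of worldwide
    turnover or a fixed amount, depending on the type of breach."""
    caps = {
        "prohibition_or_data": (0.06, 30_000_000),     # prohibitions / data provisions
        "other_obligation": (0.04, 20_000_000),        # other substantive provisions
        "misleading_information": (0.02, 10_000_000),  # incorrect info to authorities
    }
    pct, fixed = caps[breach]
    return max(pct * worldwide_turnover_eur, fixed)

# For a company with €1bn worldwide turnover, the ceiling for breaching a
# prohibition is max(6% of €1bn, €30m) = €60m.
print(max_fine_cap(1_000_000_000, "prohibition_or_data"))  # 60000000.0
```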
What happens next?
The EU Commissioners responsible for the measure, Margrethe Vestager and Thierry Breton, have both indicated a political desire on the part of the Commission to expedite the legislative process for the proposed AI Regulation. But how quickly the measure moves through the institutions of the EU will depend on the political priorities of the EU Parliament and the EU Council and the extent of lobbying.
The EU legislative process requires both the Parliament and Council to approve the Commission's proposals, with opportunities for both sides to propose amendments (which may be shaped by lobbying from business, consumer bodies or other interested parties). The formal legislative process involves the draft law moving between the Parliament and Council, although in practice the institutions typically seek to reach consensus via informal trilogue meetings in which the Commission also participates. Finalising legislation can be a long and involved process, often lasting a couple of years – potentially more for contentious legislation.
Once the finalised legislation has been published, there will be a period of time for businesses to adapt their practices to be compliant with the new provisions – the draft AI proposals indicate 24 months, although there has been some discussion of cutting this back to 18 months.
If you would like to discuss any of the issues raised in this article, please contact one of the authors or your usual Osborne Clarke contact.