Legislators worldwide move to adopt regulation by design
Published on 24th Jan 2022
A new form of regulation is emerging that will help keep pace with developments in artificial intelligence
There is a new global trend in the legislative approach to evolving technologies, led by the General Data Protection Regulation (GDPR), whereby flexible frameworks are set for compliant product design rather than rigid rules designed for enforcement after the event. We see it in parts of the GDPR, in large parts of the Digital Services Act's regulation of online safety, and now in the regulation of artificial intelligence.
What's more, this trend is having a positive cultural effect, particularly in the technology industry, where market leaders are looking to get ahead of regulation rather than lag behind it, designing their products and services to comply with legislation that is still only in draft form.
Traditionally, tech vendors would see their products investigated ex post, after a risk had materialised. If found liable, they would then have to correct their processes and compensate for any harm. This reactive model, which has always struggled to keep pace with developments in technology, is becoming obsolete. Instead, regulators and legislators are pushing companies to build compliance and regulatory teams around "product counsels" and to anticipate the harms and risks posed by a product from its inception.
This type of regulation is of course not without its rules. However, such rules are flexible – imposing standards such as "appropriate technical and organisational measures" that can be adapted depending on the company or product/service in question. Privacy by design and privacy by default are key concepts and are now the bedrock of digital regulation. Engineers and developers will need to address legal and regulatory constraints from the inception of their digital products.
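To make the idea concrete, here is a minimal sketch of what "privacy by default" can mean at the code level: every optional data use starts disabled, and retention defaults to the shortest period the product supports. The field names and values are illustrative assumptions, not requirements drawn from any specific regulation.

```python
from dataclasses import dataclass

@dataclass
class AccountPrivacySettings:
    """Illustrative privacy-by-default settings: a new account starts in
    the most protective state, and every optional data use is opt-in."""
    analytics_tracking: bool = False   # off until the user opts in
    personalised_ads: bool = False     # off until the user opts in
    share_with_partners: bool = False  # off until the user opts in
    data_retention_days: int = 30      # shortest retention the product supports

settings = AccountPrivacySettings()   # defaults apply with no user action
assert settings.personalised_ads is False
```

The design choice is the point: users receive the most protective configuration without having to act, and any broader processing requires a deliberate opt-in.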
Nowhere is this challenge more pressing than in the use of AI. Products with any sort of digital interface or information technology are increasingly likely to incorporate AI, meaning that the data sets feeding the AI software are an inextricable element of their design and use. The advantages of AI, however, bring with them challenges of bias and other harmful effects. In practice, manufacturers and distributors that use IT systems, especially AI or machine learning-based algorithms, must take regulation into account from the earliest stages of developing products that use digital data.
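Bias of this kind is typically caught with audits run during development rather than after deployment. As an illustrative sketch only (the metric and the toy data are assumptions, not anything mandated by the legislation discussed here), the snippet below computes a demographic parity gap: the difference in a model's positive-outcome rates between groups.

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive predictions per group, e.g. loan approvals."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Toy data: model outputs (1 = favourable outcome) and group labels.
rates = positive_rate_by_group([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")  # a large gap would be flagged for review
```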
Force for good?
The European Union has been a pioneer in regulating AI and in data policy more broadly. Its ambitious and human-centred approach is summarised in the draft text of its legislation to harmonise AI rules: "AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being". This is similar to the EU's approach to the regulation of data and privacy in the GDPR, the Digital Services Act and the Digital Markets Act.
The draft text makes it clear that "trustworthy AI" is a cornerstone of the European Commission's plans to incentivise technological innovation in Europe. And, for AI to be trustworthy, it must bring positive value to society and respect fundamental human rights. But how can trustworthy AI be achieved? The draft states that AI systems must be properly designed from "the lab to the market", showing the same tenacity that was formalised in the GDPR. Infringements involving high-risk uses of AI carry potential penalties of up to €30 million or 6% of global annual turnover, with lower maximum penalties for lower-risk infringements. The ambitious scale of these penalties, which also apply to companies based outside the EU that provide AI output within the EU, further incentivises privacy and compliance by design and by default.
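For a sense of scale, the headline cap works like a GDPR fine: a fixed amount or a percentage of worldwide annual turnover, with the draft taking the higher of the two. A worked example (the turnover figure below is purely illustrative):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Headline cap in the draft text: the higher of EUR 30m or 6% of
    worldwide annual turnover. Lower tiers apply to lesser infringements."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# For a company with EUR 2bn in worldwide annual turnover:
print(f"{max_penalty_eur(2_000_000_000):,.0f}")  # 120,000,000
```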
International regulatory activity
While the draft legislation was the first of its kind when it was published in April 2021, other states and institutions have already followed suit, and the European model is having global implications for businesses and consumer well-being. In the United States, the Department of Commerce and the Federal Trade Commission have created committees and drafted texts that aim to eliminate AI bias, improve algorithm oversight and provide transparency to consumers.
Meanwhile, the United Kingdom published its ten-year National AI Strategy in September 2021, emphasising the important role AI will play in market innovation and the need to couple that innovation with increased regulation to safeguard consumers from the risks associated with AI.
In China, the National Governance Committee for the New Generation Artificial Intelligence, under the Ministry of Science and Technology, issued the "Ethical Norms for the New Generation Artificial Intelligence" in September 2021. The Chinese government aims to integrate ethics into the lifecycle of AI and has introduced six principal requirements as the foundation for AI ethical norms: improving the well-being of human beings, protecting privacy and security, enhancing fairness and justice, ensuring controllability and trustworthiness, strengthening accountability, and raising ethical literacy.
This represents China's proactive approach to addressing the long-term social transformations and challenges that AI will bring. As a further step, responding to society's growing concern over the privacy challenges posed by the adoption of AI, the new Personal Information Protection Law, China's equivalent of the GDPR, also introduces specific requirements on AI-based automated decision-making and calls for its results to be transparent, fair and impartial.
Two-pronged approach
European regulators are striving to protect human rights and consumer trust while simultaneously encouraging investment, innovation and market growth. This two-pronged approach will offer opportunities to developers and investors who do not shirk the challenge of privacy by design. By changing how AI technology is integrated into goods and services, regulators hope to create a social, political and economic landscape that enhances the transformative potential of AI.