European Union lawmakers reached agreement late Friday on what they are hailing as the world’s first comprehensive law regulating artificial intelligence (AI). The deal includes provisions allowing the European Commission to adapt the pan-EU AI rulebook to keep pace with advances in the rapidly evolving field.
One notable choice in the legislation is the terminology used for the powerful AI models driving the current surge in generative AI tools. Rather than adopting industry terms such as “foundation” or “frontier” models, the Act refers to “general purpose” AI models and systems. This generic terminology is intended to future-proof the law by avoiding classifications tied to a specific technology, such as transformer-based machine learning.
A Commission official explained: “In the future, we may have different technical approaches. And so we were looking for a more generic term.” For instance, GPT-4 would count as a “general purpose AI model,” while ChatGPT, which builds on GPT-4, would count as a “general purpose AI system.”
The legislation sets out two tiers of regulation for so-called general purpose AIs (GPAIs): a low-risk tier and a high-risk tier. The high-risk rules for generative AI technologies, such as OpenAI’s ChatGPT, are triggered by a threshold defined in the law.
The agreed-upon threshold for high-risk GPAIs is 10^25 floating point operations (FLOPs), which was chosen to capture current-generation frontier models. However, lawmakers did not specifically consider whether this threshold would apply to models like GPT-4 or Google’s Gemini during negotiations.
Companies developing GPAIs will be responsible for self-assessing whether their models meet the FLOPs threshold and fall under the high-risk rules. The legislation allows for updates to the threshold over time based on technological evolution and the development of other benchmarks by the AI Office, a new expert oversight body within the Commission.
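The self-assessment described above is, in essence, a compute calculation against the 10^25 FLOPs line. As a rough illustration only: a common rule of thumb from the scaling-law literature estimates training compute as about 6 × parameters × training tokens. That approximation is not part of the Act, and the function names below are illustrative, not from any official methodology.

```python
# Illustrative sketch of a threshold self-assessment, assuming the common
# "6 * N * D" approximation for training compute (FLOPs ~ 6 x parameters
# x training tokens). This rule of thumb comes from the scaling-law
# literature and is NOT prescribed by the AI Act.

GPAI_FLOPS_THRESHOLD = 1e25  # high-risk threshold agreed in the Act


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the 6*N*D rule of thumb."""
    return 6 * n_parameters * n_tokens


def exceeds_gpai_threshold(n_parameters: float, n_tokens: float) -> bool:
    """Does the estimated training compute cross the 10^25 FLOPs line?"""
    return estimated_training_flops(n_parameters, n_tokens) >= GPAI_FLOPS_THRESHOLD


# A 70B-parameter model trained on 2 trillion tokens:
# ~6 * 7e10 * 2e12 = 8.4e23 FLOPs, below the threshold.
print(exceeds_gpai_threshold(7e10, 2e12))  # False

# A 1T-parameter model trained on 10 trillion tokens:
# ~6 * 1e12 * 1e13 = 6e25 FLOPs, above it.
print(exceeds_gpai_threshold(1e12, 1e13))  # True
```

Under this approximation, only the very largest current training runs land above the line, which matches lawmakers’ stated aim of capturing frontier-scale models while leaving smaller systems in the lower tier.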
High-risk GPAIs will be subject to ex ante-style regulatory requirements aimed at assessing and mitigating systemic risks. This includes proactive testing of model outputs to reduce potential negative effects on public health, safety, security, fundamental rights, or society as a whole.
GPAIs in the low-risk tier will face lighter transparency requirements, including an obligation to watermark generative AI outputs. The watermarking requirement, originally aimed at AI chatbots and deepfakes, will now extend to general purpose AI systems as well.
GPAI model makers must also comply with EU copyright rules, including the EU Copyright Directive. The Act’s transparency requirements will apply to open-source GPAIs as well, and they receive no exemption from copyright obligations.
The AI Office, responsible for setting risk classification thresholds for GPAIs, has not yet defined a budget or headcount. It is expected to work alongside a new scientific advisory panel to identify and address potential risks associated with advanced AI models.
While the full regulation will not come into force until around 2026, companies developing GPAIs are encouraged to follow codes of practice in the interim through the EU’s AI Pact. Specific prohibitions on AI use-cases will take effect six months after the law’s enactment, addressing concerns such as social scoring or facial recognition database scraping.
The text of the legislation is expected to be finalized and published in the EU’s Official Journal in early 2024, following votes in the European Parliament and Council.
This historic agreement on AI regulation marks a significant step forward in addressing the challenges and opportunities posed by artificial intelligence within the European Union.