European Union negotiators reached a deal on Friday on the world’s first comprehensive rules for artificial intelligence, paving the way for legal oversight of a technology that holds the potential to transform everyday life but has also raised fears of existential threats to humanity.
The negotiators, representing the European Parliament and the bloc’s 27 member countries, overcame significant differences on contentious issues such as generative AI and the use of face recognition surveillance by police. They signed a tentative political agreement for the Artificial Intelligence Act.
European Commissioner Thierry Breton announced the deal in a tweet just before midnight, stating, “The EU becomes the very first continent to set clear rules for the use of AI.” The agreement came after marathon closed-door talks: an initial session lasting 22 hours, followed by a second round on Friday.
While officials claimed a political victory with the flagship legislation, civil society groups gave it a reserved reception, saying technical details still needed to be worked out in the coming weeks. Critics argued that the deal did not go far enough in protecting individuals from harm caused by AI systems.
Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group, commented, “Today’s political deal marks the beginning of important and necessary technical work on important details of the AI Act, which are still missing.”
The European Parliament is expected to vote on the act early next year, but with the deal done, that vote is seen as a formality, according to Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts.
The proposed law, which would not take effect before 2025, threatens substantial financial penalties for violations of up to 35 million euros ($38 million) or 7% of a company’s global turnover.
Generative AI systems, exemplified by OpenAI’s ChatGPT, have gained widespread attention for their ability to produce human-like text, photos, and music. However, concerns have been raised about the potential risks these rapidly advancing technologies pose to jobs, privacy, copyright protection, and even human life.
The U.S., U.K., and China, along with global coalitions, have also introduced their own proposals to regulate AI, but Europe took an early lead with its initial draft in 2021. The recent surge in generative AI prompted European officials to revise the proposal, which is expected to serve as a blueprint for AI regulation worldwide.
The AI Act, originally designed to address specific AI functions based on their level of risk, has been expanded to cover foundation models, the systems that underpin general-purpose AI services. These models, central to systems like ChatGPT and Google’s Bard chatbot, became a focal point of discussion. Negotiators reached a tentative compromise despite opposition from France, which had advocated self-regulation to help homegrown European generative AI companies compete with major U.S. rivals.
The most advanced foundation models, categorized as posing “systemic risks,” will face additional scrutiny, including compliance with EU copyright law, technical documentation, risk assessment and mitigation, incident reporting, cybersecurity measures, and disclosure of energy efficiency.
Researchers have warned that powerful foundation models could be used for online disinformation and manipulation, cyberattacks, or the creation of bioweapons. Rights groups also caution that the lack of transparency about the data used to train these models poses risks to daily life, since they serve as the basic structures on which AI-powered services are built.
One of the most contentious issues in the negotiations was AI-powered face recognition surveillance. Despite calls for a full ban on its public use over privacy concerns, negotiators reached a compromise after intensive bargaining that allows exemptions for law enforcement to use such systems to address serious crimes.
Rights groups expressed reservations about the exemptions and other loopholes in the AI Act, including the failure to restrict AI systems used in migration and border control and the option for developers to opt out of having their systems classified as high risk. Daniel Leufer of the digital rights group Access Now said, “Whatever the victories may have been in these final negotiations, the fact remains that huge flaws will remain in this final text.”