EU ministers gave the green light to a common approach to AI legislation at a Telecoms Council meeting on Tuesday (December 6). EURACTIV provides an overview of the main changes.
The AI Act is a flagship legislative proposal to regulate artificial intelligence technology based on its potential for harm. The EU Council is the first co-legislator to complete the first stage of the legislative process, with the European Parliament due to finalize its version in March next year.
“The Czech Presidency’s final compromise text takes into account the main concerns of member states and preserves a delicate balance between the protection of fundamental rights and the promotion of AI technology,” said Ivan Bartos, the Czech Republic’s Deputy Prime Minister for Digitization.
How AI is defined was a critical part of the discussions, as it determines the scope of the regulation.
Member states were concerned that traditional software would be caught by the rules, so they put forward a narrower definition covering systems developed through machine learning and logic- and knowledge-based approaches, elements the Commission could later specify or update through delegated acts.
General Purpose AI
General purpose AI covers systems such as large language models that can perform a variety of tasks. As such, it was not initially covered by the AI regulation, which only envisaged systems with a specific intended purpose.
However, member states felt that leaving these critical systems out of scope would undermine the AI rulebook, while the specifics of this new market would require some tailoring.
The Czech Presidency resolved the issue by tasking the Commission with carrying out an impact assessment and consultation on adapting the rules to general purpose AI via an implementing act, within a year and a half of the regulation entering into force.
The AI rulebook outright prohibits using the technology for subliminal techniques, for exploiting vulnerabilities, and for establishing Chinese-style social scoring.
The prohibition on social scoring was extended to private actors to avoid circumvention of the prohibition by a contractor, while the concept of vulnerability was extended to socio-economic aspects.
High-risk categories
Annex III of the regulation lists the uses of AI considered to carry a high risk of harming people or property, which must therefore comply with strict legal obligations.
Notably, the Czech Presidency introduced an additional layer: to be classified as high risk, a system must carry decisive weight in the decision-making process and not be ‘purely accessory’, a concept the Commission is tasked with defining via an implementing act.
Deepfake detection by law enforcement authorities, crime analytics, and the verification of the authenticity of travel documents were removed from the Council’s list. However, critical digital infrastructure and life and health insurance have been added.
Another important change is that the Commission can not only add high-risk use cases to the Annex, but also delete them under certain conditions.
Moreover, the obligation for providers of high-risk systems to register in the EU database has been extended to users that are public bodies, with the exception of law enforcement.
High-risk obligations
High-risk systems must meet requirements such as dataset quality and detailed technical documentation. For the Czech Presidency, these provisions have been “clarified and arranged in such a way that they are technically more feasible and less burdensome for participants to comply with”.
The common approach seeks to articulate the allocation of responsibility along complex AI value chains and how AI law will interact with existing sectoral legislation.
Member States introduced a number of carve-outs in the text for law enforcement, some of which were intended as ‘bargaining chips’ for negotiations with the European Parliament.
For example, while users of high-risk systems are required to monitor the systems after launch and notify the provider of serious incidents, this obligation does not apply to sensitive information from law enforcement operations.
By contrast, EU governments appear far less willing to negotiate on the exemption from the regulation’s scope of AI applications related to national security, defence and the military, or on the possibility for police agencies to use ‘real-time’ remote biometric identification systems in exceptional circumstances.
Administration and Enforcement
The Council has strengthened the AI Board, which will gather the competent national authorities, notably by introducing elements already present in the European Data Protection Board, such as a supporting pool of experts.
The general approach mandates the Commission to designate one or more testing facilities to provide technical support for enforcement and guidance on how to comply with the legislation.
Fines for breaching AI obligations have been eased for SMEs, while a set of criteria has been introduced for national authorities to consider when calculating penalties.
The AI Act includes the possibility to set up regulatory sandboxes, controlled environments overseen by an authority, where companies can test AI solutions.
The Council’s text allows such testing to take place in real-world conditions and, subject to certain safeguards, even unsupervised.
Transparency requirements for sentiment detection and deepfakes have been improved.
[Edited by Nathalie Weatherald]