- EU institutions have reached an agreement on the AI Act, which covers foundation models and introduces a tiered regime with transparency obligations and mandatory labeling of AI-generated content.
- Stricter rules apply to “systemic” foundation models, alongside outright prohibitions (e.g. biometric categorization, emotion recognition, social scoring) and sanctions of up to 7% of global turnover.
After several days of negotiations, representatives of the European Parliament and the Council finally reached an agreement on the AI Act on Friday (media release). The AI Act has thus taken a decisive step closer to adoption; however, it still has to be formally adopted in votes in Parliament and the Council.
Already on Thursday, a provisional compromise had been reached on the inclusion of foundation models, a pièce de résistance of the discussions (see here). The definition of the systems covered by the AI Act is now apparently based on that of the OECD, which, following an amendment of November 8, 2023, no longer requires AI systems to pursue objectives set by humans and now reads as follows:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
This definition covers foundation models. Foundation models are so named because, thanks to their training on extensive data, these machine learning models are suitable for a wide range of applications (in the discussions on the AI Act, Spain had proposed the following definition: an “AI model that is capable to competently perform a wide range of distinctive tasks”).
Because foundation models are not limited to a specific application, they were poorly covered by the Commission’s original AI Act proposal. The agreement that has now been reached apparently adopts a tiered approach proposed by Spain: transparency obligations apply to all models (e.g. with regard to training), AI-generated content must be recognizable as such, and copyright must, of course, be observed.
Certain foundation models, namely those that pose a systemic risk, are referred to as “systemic” and are regulated more heavily. These include models that have been trained with particularly high computing power. Such models must, for example, be subject to evaluation and adversarial testing (red teaming), their systemic risks must be assessed and mitigated, the Commission must be informed of serious incidents, cybersecurity must be ensured, and their energy efficiency must be reported.
A compromise was also reached on how to deal with open source models. Free and open source systems are only to be covered by the AI Act if they constitute a prohibited practice, fall into the category of high-risk systems or can be used for manipulation.
On Friday, the debate appeared to focus in particular on how to deal with biometric recognition systems in public spaces and the question of whether public authorities should be allowed to use biometric systems to categorize people according to criteria such as gender, race, religion, etc., to recognize emotions or for police work (“predictive policing”). Some Member States consider the use of such practices to be appropriate for security purposes, e.g. France for the 2024 Olympic Games.
For high-risk systems, among other things, a Fundamental Rights Impact Assessment is to become mandatory. An agreement was also reached with regard to the prohibited practices. The following practices are to be prohibited:
- biometric categorization systems that use particularly sensitive data or information (e.g. political, religious or philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the Internet or from video surveillance footage to create corresponding databases;
- emotion recognition in the workplace and in educational institutions;
- social scoring based on social behavior or personal characteristics;
- AI systems that manipulate people;
- AI systems that exploit people’s vulnerabilities (due to age, disability or social or economic situation).
Depending on the type of infringement and the size of the company, the upper limits for sanctions were set at EUR 35 million or 7% of global annual turnover at the top end, and EUR 7.5 million or 1.5% of turnover at the lower end.
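Purely as an illustration of how such caps typically interact, here is a minimal sketch. It assumes that, for each sanction tier, the higher of the fixed amount and the turnover-based amount applies (the usual reading in EU law, although the final wording of the AI Act is what counts), and the turnover figure is invented for the example:

```python
# Illustrative sketch only; the exact mechanics depend on the final text of the AI Act.
# Assumption: per tier, the cap is the higher of a fixed amount and a percentage of
# global annual turnover ("whichever is higher").

def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return the assumed upper limit of the fine for one sanction tier."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

# Hypothetical company with EUR 2 billion in global annual turnover.
turnover = 2_000_000_000

# Most serious infringements (prohibited practices): EUR 35M or 7% of turnover.
print(max_fine_eur(turnover, 35_000_000, 0.07))    # 140000000.0

# Lower tier: EUR 7.5M or 1.5% of turnover.
print(max_fine_eur(turnover, 7_500_000, 0.015))    # 30000000.0
```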
An office for artificial intelligence (the AI Office) is also to be set up within the EU Commission. The competent national authorities will also convene there to ensure uniform application of the AI Act.