In the EU, the legislative process for the “Artificial Intelligence Act” (“AI Act” or “AI Regulation”) is on the home stretch after months of intensive negotiations. In their vote of May 11, 2023 on the compromise text, the two responsible committees of the EU Parliament (IMCO and LIBE) reached political agreement on the world’s first set of rules for artificial intelligence, after preliminary agreement had already been reached in the two committees at the end of April 2023. The current draft will now be put to the vote in the plenary of the EU Parliament in mid-June (June 14). After that, the trilogue negotiations between Parliament, Council and Commission will begin.
Initial situation
The AI Act is a legislative project of the European Commission to regulate artificial intelligence (AI). With its proposal of April 21, 2021, the Commission became the first legislator to submit a comprehensive proposal for regulating AI. With the proposed legislation, the EU is attempting a balancing act: on the one hand, the AI Act is intended to ensure that affected individuals do not suffer any disadvantages from the use of AI systems; on the other hand, the new regulation is intended to promote innovation and leave as much room as possible for the development and use of AI.
There had been delays in the legislative process since the end of 2022. The reason for this was not only the 3000 amendments submitted, but also the emergence of generative AI (especially ChatGPT) and the discussion on how the AI Act should deal with it. In the draft AI Act of April 21, 2021, models such as ChatGPT did not yet play a role.
The AI Act is applicable to providers and users of AI systems. “Providers” are those actors who develop a system and place it on the market, while “users” are those entities that use a system under their own responsibility, except where the use takes place in a personal, non-professional context. Consumers, end users and other natural or legal persons affected by the outputs of such systems are not themselves covered.
Risk-based approach of the regulation
The core element of the AI Act is a risk-based approach that ties requirements and prohibitions to the potential capabilities and risks of a system. The higher the risk an AI system poses to the health, safety or fundamental rights of individuals, the more stringent the regulatory requirements. The AI Act thus classifies AI applications into different risk categories with different consequences (a simplified sketch follows the list below):
- Unacceptable risk (e.g. social scoring) – the use of such AI systems is prohibited
- High risk (e.g. AI systems used for the biometric identification of natural persons or for assessing exams) – subject to extensive obligations (see below)
- Limited risk or no risk (e.g. spam filters)
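Purely as an illustration of this tiered logic, the following minimal Python sketch maps the example use cases from the list above to the risk categories described here; the names and the mapping are illustrative assumptions, not terminology or logic taken from the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative shorthand for the three tiers described above."""
    UNACCEPTABLE = "use prohibited"
    HIGH = "extensive obligations"
    LIMITED_OR_NONE = "few or no obligations"

# Hypothetical mapping of the example use cases from the list above to tiers.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric identification of natural persons": RiskTier.HIGH,
    "assessment of exams": RiskTier.HIGH,
    "spam filter": RiskTier.LIMITED_OR_NONE,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```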
Most important changes
The main changes compared to the Commission’s April 21, 2021 draft are as follows:
- Definition of AI systems
- High-risk AI systems: additional layer for classification in high-risk categories and more extensive obligations for corresponding systems
- Prohibited AI systems: extended list
- Stricter rules for so-called Foundation Models and General Purpose AI
- Establishment of an AI office
- Six AI principles
Definition of AI systems
A major point of discussion concerned the definition of AI or “AI systems”. Industry and academia in particular criticized the lack of definitional clarity, because the first definition in the April 2021 draft could be applied to almost all forms of software. For this reason, the responsible MEPs agreed on a new definition, aligned with the future definition of the OECD:
Art. 3(1): “Artificial intelligence system” (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations or decisions that influence physical or virtual environments.
Thus, for an AI system to fall within the scope of the AI Act, the system must have a degree of autonomy, i.e. a certain independence from the human operator or from human influence.
High-risk AI systems
One area that has been contentious within the two parliamentary committees and will likely lead to further discussion in the trilogue negotiations is the extensive list of high-risk applications (Annex III of the regulation). The original draft classified AI systems falling under the critical use cases listed in Annex III as high-risk in every case. Parliamentarians have now added an additional requirement: an AI system should only be classified as high-risk if it also entails a significant risk to health, safety or fundamental rights. A significant risk is a risk which, due to the combination of its severity, intensity, probability of occurrence and duration of its effects, is substantial and may affect an individual, a large number of individuals or a specific group of individuals (cf. Art. 1b).
If AI systems fall under Annex III but providers believe there is no significant risk, they must notify the competent authority, which has three months to object. In the meantime, providers can place their system on the market – but if the assessment is incorrect, the provider can be sanctioned.
AI systems used to manage critical infrastructures such as energy networks or water management systems are now also classified as high risk, provided that these applications can lead to serious environmental risks. Recommendation systems from “very large online platforms” (more than 45 million users), as defined by the Digital Services Act (DSA), are also considered high risk. Furthermore, additional safeguards (e.g., documentation requirements) have been included for the process by which providers of high-risk AI systems can process sensitive data, such as sexual orientation or religious beliefs, to detect negative bias. AI systems that fall into the high-risk category must then record their environmental footprint according to the latest draft.
Extensive obligations are imposed on providers and users of high-risk AI systems, e.g. conformity assessments, risk management systems, technical documentation, record-keeping requirements, transparency and provision of information to users, human oversight, accuracy, robustness and cybersecurity, quality management systems, and the reporting of serious incidents and malfunctions. In addition, specified quality criteria for training, validation and test data sets must be met.
Prohibited practices
A politically sensitive discussion centered on which types of AI systems should be banned because they pose an unacceptable risk. In the end, this category was expanded: the use of real-time biometric identification software in publicly accessible spaces would now be banned altogether; according to the compromise text, retrospective (ex-post) recognition software may only be used for the prosecution of serious crimes and only with prior judicial authorization. The use of AI-based emotion recognition software in law enforcement, border management, the workplace and education would also be banned.
In addition, “intentionally manipulative or deceptive techniques” are now also prohibited (although proving intent could prove difficult). This prohibition does not apply to AI systems intended for approved therapeutic purposes on the basis of informed and explicit consent. MEPs have also extended the ban on “predictive policing” from felonies to misdemeanors.
“General Purpose AI” and “Foundation Models”
Preliminary remarks (summarized in a short sketch after this list):
- Machine Learning (ML) is a subfield of AI.
- General Purpose AI (GPAI, generative AI) is in turn a subfield of ML that can generate new content such as text, images, video, code, etc. in response to a prompt.
- Foundation Models (FMs) are deep learning models, usually trained on a wide range of data sources and large data sets, that can perform a broad range of tasks, including tasks for which they were not specifically developed and trained. FMs are a variant of GPAI.
- A Large Language Model (LLM) is a sub-variant of FMs: a language model based on a neural network that has been trained on very large amounts of text.
- GPT is a series of LLMs from OpenAI that has been under development since 2018. The latest version is GPT‑4.
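The nesting of these terms can be pictured with a minimal, purely illustrative sketch; the class names are shorthand for the terms above and are not taken from the AI Act:

```python
# Each class below is a special case of its parent, mirroring the nesting described above.
class AISystem: ...
class MachineLearningModel(AISystem): ...          # ML is a subfield of AI
class GeneralPurposeAI(MachineLearningModel): ...  # GPAI is a subfield of ML
class FoundationModel(GeneralPurposeAI): ...       # FMs are a variant of GPAI
class LargeLanguageModel(FoundationModel): ...     # LLMs are a sub-variant of FMs

gpt4 = LargeLanguageModel()          # e.g. GPT-4
assert isinstance(gpt4, AISystem)    # an LLM is therefore also an AI system
```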
The draft AI Act of April 21, 2021, contained no reference to AI systems without a specific purpose (General Purpose AI). This changes with the current compromise text: the rise of ChatGPT and other generative AI systems has led parliamentarians to want to regulate “General Purpose AI systems” (GPAI) and “Foundation Models” as well.
Initially, calls for a ban or permanent classification of ChatGPT and similar AI systems in the high-risk category were discussed. However, the current compromise text does not classify GPAI as high-risk per se. It is only when vendors integrate GPAI into their AI systems that are considered high-risk that the strict requirements of the high-risk category also apply to GPAI. In this case, GPAI providers must assist downstream providers in complying by providing information and documentation about the AI model.
Stricter requirements are also proposed for Foundation Models. These relate, for example, to risk management, quality management, data management, security and cybersecurity, and the degree of robustness of a foundation model. Art. 28b of the compromise text regulates the obligations of the providers of a Foundation Model regardless of whether it is provided as a standalone model or embedded in an AI system or product, under free and open source licenses, as a service or through other distribution channels. In addition to a number of detailed transparency obligations (reference to Art. 52; e.g., disclosure to natural persons that they interact with an AI system), providers of Foundation Models should also be required to provide a “sufficiently detailed” summary of the use of copyright-protected training data (Art. 28b(4)(c)). It is not clear how this is to be implemented for companies such as OpenAI, because ChatGPT, for example, was trained on a dataset of over 570 GB of text data.
New AI principles
Finally, Art. 4a of the compromise text contains so-called “General Principles applicable to all AI systems”. All actors covered by the AI Act should develop and deploy AI systems and foundation models in accordance with the following six “AI principles”:
- Human agency and oversight: AI systems should serve humans, respect human dignity and personal autonomy, and function in such a way that they can be controlled and monitored by humans.
- Technical robustness and safety: Unintended and unexpected damage should be minimized, and AI systems should be robust in the event of unintended problems.
- Data protection and data governance: AI systems should be developed and deployed in compliance with data protection regulations.
- Transparency: Traceability and explainability must be possible, and people must be made aware that they are interacting with an AI system.
- Diversity, non-discrimination and fairness: AI systems should engage diverse stakeholders and promote equal access, gender equality, and cultural diversity, and conversely avoid discriminatory effects.
- Social and environmental well-being: AI systems should be sustainable, environmentally friendly, and developed and used for the benefit of all people.
Establishment of a European AI Office
There was agreement in both parliamentary committees that the enforcement architecture should include a central element, particularly to support the harmonized application of the AI Act and for cross-border investigations. For this reason, the establishment of an AI Office was proposed. In the new compromise text (Art. 56 ff.), the tasks of this office are explained in detail.
Sanctions
Violations of the AI Act can result in severe fines, similar to the GDPR. Violations of the prohibitions or of the data governance requirements for high-risk systems are subject to fines of up to EUR 30 million or 6% of global annual revenue, whichever is greater.
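The “whichever is greater” rule amounts to a simple maximum; a minimal sketch, assuming a made-up revenue figure purely for illustration:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper limit of the fine described above: the greater of EUR 30 million
    and 6% of global annual revenue."""
    return max(30_000_000.0, 0.06 * global_annual_revenue_eur)

# Example with an assumed global annual revenue of EUR 2 billion:
# 6% of EUR 2 billion = EUR 120 million, which exceeds EUR 30 million.
print(max_fine_eur(2_000_000_000.0))  # 120000000.0
```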
International scope: Impact on Switzerland
Swiss providers that place AI systems on the market or put them into operation in the EU are covered by the territorial scope of the AI Act. In addition, the AI Act applies to Swiss providers and users of AI systems if the output produced by the AI system is used in the EU.
The so-called “Brussels effect” is also likely to be felt in Switzerland: many Swiss AI providers will develop their products not only for the Swiss market, which means that the new European standards of the AI Act are likely to become established in Switzerland as well.
Further procedure and entry into force
There could well be surprises in the parliamentary plenary vote in mid-June; however, the Parliament’s position is largely consolidated. Once the Parliament has formally adopted its position, the draft will enter the final phase of the legislative process: the so-called trilogue negotiations, in which representatives of the EU Council, the EU Parliament and the EU Commission agree on a final text. However, the AI Act is not expected to be adopted before the end of 2023 and will therefore enter into force in mid-2024 at the earliest. A two-year implementation period will then follow. However, the provisions on notifying authorities and notified bodies as well as the provisions on the European Committee for Artificial Intelligence and the competent national authorities are to take effect as early as three months after entry into force. Art. 71 (sanctions) will also apply as early as twelve months after entry into force.
Even though it will take some time before the regulation becomes relevant for (Swiss) companies, they should already familiarize themselves with the current draft.