- Trilogue negotiations between the Commission, Parliament and Council continue to clarify key differences on the AI Regulation.
- Points of controversy include exceptions to the prohibited practices, the regulation of foundation models, and questions of governance and enforcement.
- France, Germany and Italy support mandatory codes of conduct for foundation models instead of untested standards.
The AI Regulation is currently being negotiated in trilogue between the European Commission, the European Parliament and the Council of the EU. The Commission submitted its proposal in April 2021; the Council adopted its position on December 6, 2022, and the Parliament adopted its position on June 14, 2023. The first trilogue also took place on June 14, 2023, the second on July 18, 2023, the third on October 2–3, 2023, the fourth on October 17 and the fifth on October 24.
There was particular dissent on
- the exceptions to the prohibited practices for purposes of criminal prosecution,
- the regulation of foundation models ("basic models") – the Parliament wanted to provide for regulation here;
- questions of governance and enforcement, e.g. the amount of the fines.
According to a report from Reuters, France, Germany and Italy have now apparently agreed that "mandatory self-regulation through codes of conduct" is suitable for foundation models, in contrast to "untested standards". Foundation models are AI models that are trained on large data sets and whose output is general enough to be used for a variety of tasks; this also includes generative systems such as ChatGPT. Under this approach, the AI Regulation would not regulate AI as a foundational technology, but rather its use. However, developers of foundation models would have to provide information about their learning model. This agreement is expected to accelerate the further course of the negotiations.
An overview of global AI regulation activities can be found at the IAPP.