FAQ on the AI Act

By David Vasella, Version 1.0, September 22, 2024

The author thanks Amina Chammah, Lena Götzinger, Hannes Meyle and Kento Reutimann (all Walder Wyss) for valuable advice, and David Rosenthal (Vischer) for fruitful discussions.

We are grateful for information about errors.

Overview

The applicability of the AI Act and the definition of the roles of provider and deployer – there are also other roles – can be illustrated as follows:

Basics

1. Which terms are used in this FAQ?

In addition to the legally defined terms (→ 8), these FAQs use the following abbreviations:

AI: Artificial intelligence

AIA: AI Act (AI Regulation). References to articles without further indication refer to the AIA

AIS: AI system

FOSS: Free and open-source software

GPAI: General-purpose AI

GPAIM: General-purpose AI model

GPAIS: General-purpose AI system

HRAIS: High-risk AIS (AI system with high risks)

QMS: Quality management system

RMS: Risk management system

2. What is the AIA?

The “Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence and amending […]” (the Artificial Intelligence Regulation, AI Regulation, AI Act or AIA) is the comprehensive regulatory framework with which the EU (or the EEA – the AIA is of EEA relevance) regulates the use of AI systems (AIS).

An English-language online version of the AIA with a non-binding assignment of the recitals can be found at datenrecht.ch, as well as a PDF version.

It is a regulation and, like the GDPR, therefore directly applicable. However, the competent authorities will be able to specify and amend some points (→ 51).

In terms of substance, the AIA first defines its material and geographical scope of application and then lays down rules for the development and use of AIS, especially for “high-risk AI systems” (HRAIS) and for AIS with a general purpose (i.e. use-case-agnostic, broadly applicable AIS – so-called “general-purpose AI”, GPAI; see → 39). Certain particularly undesirable practices (use cases) are also prohibited (→ 27).

3. Where can I find more information on the AI Act?

The academic review of the AI Act is underway, but still in its infancy. From the general Swiss literature, reference can be made (without claim to completeness) to the following articles:

  • Rosenthal, The EU AI Act – Regulation on Artificial Intelligence, Jusletter of August 5, 2024 (https://dtn.re/tLrdFm)

  • Arioli, Risk Management under the EU Regulation on Artificial Intelligence, Jusletter IT of July 4, 2024 (https://dtn.re/7iE4zb)

  • Houdrouge/Kruglak, Are Swiss data protection rules ready for AI?, Jusletter of November 27, 2023 (https://dtn.re/KvghSt)

  • Miller, The EU Artificial Intelligence Act: A risk-based approach to the regulation of artificial intelligence, EuZ 1/2022 (https://dtn.re/PafzEb)

Special literature can be found in particular on copyright issues in connection with generative AI (e.g. Thouvenin/Picht, AI & IP: Recommendations for legislation, application of the law and research on the challenges at the interfaces of [AI and IP], sic! 2023, 507 ff.), on liability issues (e.g. Quadroni, Künstliche Intelligenz – praktische Haftungsfragen, HAVE 2021, 345 ff.) and on labor law topics (e.g. Wildhaber, Künstliche Intelligenz und Mitwirkung am Arbeitsplatz, ARV 2024, 1 ff.).

Further information can be found on an ongoing basis at www.datenrecht.ch and on the blog of Vischer (https://dtn.re/BAG7Il).

The following works from the foreign legal literature should be mentioned in particular:

  • Voigt/Hullen, Handbook AI Regulation: FAQ on the EU AI Act, 2024 (Kindle e-book: https://dtn.re/bIwQg3)

  • Wendt/Wendt, The New Law of Artificial Intelligence, 2024 (Kindle e-book: https://dtn.re/kFmWjk)

Reference should also be made to non-legal or not primarily legal literature:

  • Gasser/Mayer-Schönberger, Guardrails: Guiding Human Decisions in the Age of AI, 2024, a discussion of frameworks (laws, norms, and technologies) for decision-making, the challenges of digital decisions, and possible principles for guardrails (Kindle e-book: https://dtn.re/nYx3pm)

  • Strümke, Artificial Intelligence (Kindle e-book: https://dtn.re/eOI7vU), a fairly comprehensive and readable introduction to the history of the field, technical issues, risks and weak points, and speculations on further development.

4. How did the negotiation of the AI Act proceed?

The European Commission presented its proposal on April 21, 2021 (Proposal of the European Commission of April 21, 2021, https://dtn.re/JSQJtF), with stricter regulations on transparency and traceability being a particular concern. The regulation of AI models that are suitable for widespread use (“general-purpose AI models”, GPAIM; at the time often referred to as “foundation models”) was already the subject of intense debate at the time.

In the subsequent trilogue negotiations – the informal negotiation procedure in which representatives of the Parliament, the Council and the Commission seek a compromise – the issue of GPAI remained a point of contention until the end, when a compromise was reached on December 9, 2023. This course of events explains the separate and remarkably brief regulation of GPAI in Chapter V (→ 39 ff.).

On May 21, 2024, the Council approved the outcome of the negotiations. The AI Act was published in the Official Journal of the European Union on July 12, 2024 (OJ L, 2024/1689, https://dtn.re/0OYJXY).

5. When will the AIA take effect?

The AIA entered into force on August 1, 2024, 20 days after its publication in the Official Journal. Its provisions will take effect gradually (Art. 113):

  • February 2, 2025: Chapters I and II (general provisions and prohibited practices) take effect.

  • August 2, 2025: Certain requirements, including reporting obligations and sanctions, become effective. This concerns the provisions on notifying authorities and notified bodies (Chapter III Section 4), the requirements for GPAIM (Chapter V), governance in the EU (Chapter VII) and sanctions (Chapter XII), as well as the provisions on the authorities’ duty of confidentiality (Art. 78);

  • August 2, 2026: Most of the provisions take effect, especially those for HRAIS, with the following exception;

  • August 2, 2027: The provisions for HRAIS also apply within the scope of Art. 6 (1), i.e. for AIS that are installed as a safety component of a product in accordance with Annex I or used as such.

6. Are there transitional provisions in the AIA?

Yes, a few, according to Art. 111:

  • In principle, the AIA will not apply until August 2, 2030 to operators of HRAIS that were placed on the market or put into service before August 2, 2026. This is subject to the proviso that the system undergoes significant changes at a later date.

  • Providers of GPAIM are only subject to the AIA from August 2, 2027 if the GPAIM was placed on the market before August 2, 2025.

  • Art. 111 provides that AIS used as components of the large-scale IT systems in the public sector listed in Annex X only have to be compliant by the end of 2030. This concerns the Schengen Information System, the Visa Information System and similar systems.

7. What is “artificial intelligence” (AI)?

The term “artificial intelligence” (“AI”) refers to the behavior of a computer that is not and cannot be intelligent, but that looks intelligent from the outside. A definition from the European Parliament goes in the same direction: “Artificial intelligence is the ability of a machine to imitate human abilities such as logic and creativity”. The well-known Turing test, for example, is passed when a person can no longer recognize whether their conversation partner is a human or a machine.

The distinction between artificial intelligence and determined systems is therefore not qualitative, but ultimately quantitative. Artificial intelligence is whatever looks like it, because a machine arrives at a result that was not determined by a human being – or appears not to have been: complex systems are also determined; they only appear intelligent because their result is surprising, which is due to the fact that a machine decision is not factually comprehensible in all respects, owing to its particular complexity and the lack of access to the training data. This also makes it difficult to interpret the concept of the AI model under the AIA (→ 13).

8. What terms does the AIA define?

The AIA defines a total of 68 terms in Art. 3. They are subsequently used in the AIA without any explicit reference to the definition in the relevant article – when reading, you therefore often have to return to Art. 3, especially as terms are also legally defined for which you would not necessarily expect this (e.g. “risk” or “widespread violation”).

To make matters worse, the German version uses German for some terms (“Inverkehrbringen”) and English for others (“Provider”, “Deployer”). A comparison of the German and English equivalents can therefore be found in the appendix to this FAQ.

9. What role do statistics play in the field of AI?

Relationships between data are mapped using statistical models. This does not mean that statistics as such is a form of AI: statistical methods are mathematical models that are used both in AI and in deterministic approaches. However, machine learning (→ 10) and other approaches generally work with statistical methods.

An important method of this kind is, for example, regression analysis. It determines the factors (variables) that are decisive for a result (or the strength of the influence of a variable on a result), which allows a corresponding forecast to be made. If the x-axis of a diagram is the number of visitors to an exhibition and the y-axis is the rainfall, the points on the diagram indicate the number of visitors depending on the rainfall. If you draw a line that mathematically best fits all the points (the “regression line”), it explains the relationship between the axes or variables, in this case how the rain affects the number of visitors. It can also be stated how strongly the data points deviate from this line, i.e. the error range or degree of reliability of the regression line (usually expressed as “R²” or “r²”; an R² value of 0.73 means that 73% of the variance in the data is explained by the regression line).

Linear regression is based on the hypothesis that the target value (the number of visitors) depends linearly on a variable (the rainfall) – or, say, that the market value of a property reacts uniformly to a change in land area or location. Here, a straight line is drawn through the data points, and further values (the number of visitors, the property value) can be determined on this basis. This makes simple prognostic models possible. With non-linear regression, a curved line is formed instead, because a non-linear relationship is to be represented (e.g. if the number of visitors only falls when it rains heavily and not when it drizzles, or if sales figures fall more sharply once prices rise above a certain point – the price threshold). A determined logic is also used here.
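To make the regression example concrete, the following is a minimal sketch in Python; the rainfall and visitor figures are invented for illustration:

```python
import numpy as np

# Invented example data: rainfall in mm (x) and exhibition visitors (y)
rainfall = np.array([0, 2, 5, 8, 12, 20, 30])
visitors = np.array([540, 510, 470, 430, 380, 300, 210])

# Fit a regression line y = a*x + b (least squares)
a, b = np.polyfit(rainfall, visitors, deg=1)
predicted = a * rainfall + b

# R²: share of the variance in y that the line explains
ss_res = np.sum((visitors - predicted) ** 2)
ss_tot = np.sum((visitors - visitors.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"visitors = {a:.1f} * rainfall + {b:.1f}, R² = {r2:.2f}")
```

A non-linear relationship would be modeled in the same way, only with a higher polynomial degree (deg=2 or more) instead of a straight line.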

Other statistical methods include, for example, cluster analyses. Here it is not about a specific linear or non-linear relationship between variables, but about quantifying relationships between data using distance or similarity measures and categorizing objects with a low mutual distance into a common group. In two- or multi-dimensional data (“data clouds”), clusters have a common center of gravity, and cluster analyses are used to find these centers and assign data to the cluster whose center is closest. This can be used, for example, to assign potential borrowers to a cluster when granting loans and to allocate loan conditions on this basis.
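As a sketch of this idea, k-means clustering can be used, here with scikit-learn; the data points and the number of clusters are invented assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented two-dimensional "data cloud", e.g. income and existing debt of applicants
X = np.array([[30, 5], [32, 7], [31, 6],      # group 1
              [80, 10], [85, 12], [82, 9],    # group 2
              [50, 40], [55, 45], [52, 42]])  # group 3

# Find three cluster centers and assign each point to the closest one
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("centers:", kmeans.cluster_centers_)
print("labels:", kmeans.labels_)

# A new applicant is assigned to the cluster whose center is closest
print("new point:", kmeans.predict(np.array([[33, 6]])))
```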

Finally, parametric models can be distinguished from non-parametric models. In non-parametric regression, the relationship between the variables is not predetermined, but is first derived from existing data according to various criteria, e.g. for modeling economic data, investigating pollutant concentrations or forecasting share prices. Parametric statistics, on the other hand, presuppose that the data used correspond to a specific statistical distribution characterized by a fixed number of parameters.

10. What is “machine learning” (ML)?

From a technical perspective, AI is the branch of computer science that deals with the development of corresponding systems. The most important technology in this field is “machine learning” (“ML”). It is not a synonym for AI, because ML essentially serves to recognize patterns and derive predictions from them, while AI attempts to solve a task.

ML is intended to enable a computer to “learn” on the basis of data, i.e. to derive knowledge from data. However, “knowledge” is the wrong term. The old distinction between deduction and induction is important here. In deductive conclusions, a rule given as true is applied, and the results derived from the rule can be considered as true as the rule itself (rule: all fish can swim; input: Wanda is a fish; result: Wanda can swim). There is also abduction, which infers the most plausible explanation from an observation: from a headache, for example, one cannot simply deduce a particular cause, because headaches can have many causes; such a conclusion would be inadmissible as a deduction. Abduction therefore works through several possible causal chains to find the most probable cause. Such systems are common; a well-known example is the “CADUCEUS” diagnostic system. With inductive conclusions, on the other hand, an assumed rule is inferred from information. Machine learning often proceeds inductively: statistically based statements are generated from data; these are more or less convincing hypotheses, but cannot claim to be true or objective. However, the boundaries are fluid, because these approaches can also be combined.

ML therefore enables a machine to observe data and use it to generate predictions or hypotheses that are more or less probable, i.e. that fit the input data more or less well. The explanation of the hypothesis formed in this way – e.g. the step from a correlation to causality – lies outside of ML; it is a form of heuristics, not ML. This is why ML often relies on large amounts of data: the patterns in question, the relationships between data, only become observable in the mass.

Recognizing patterns means generalizing. The better a model – models are mathematical functions – can generalize, the more powerful it is. As mentioned, training is used for this purpose. If the training uses too little data, the model cannot draw reliable conclusions – this is known as “underfitting”. Conversely, a model can learn the input data too well; in extreme cases it learns it by heart. It then fits the input data but is unable to generalize, like a person who has a good memory but does not think – this is called “overfitting”. When training an ML model, validation and test data sets are therefore used in addition to the training data in order to reduce both overfitting and underfitting and to improve, or at least estimate, the validity – the reliability of the generalizing hypothesis.
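The following sketch illustrates, with invented data, how a validation set exposes overfitting: a high-degree polynomial fits the training data almost perfectly but generalizes worse than a simple line.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a noisy linear relationship
x = rng.uniform(0, 10, 30)
y = 3 * x + 5 + rng.normal(0, 2, 30)

# Split into training and validation data
x_train, y_train = x[:20], y[:20]
x_val, y_val = x[20:], y[20:]

for degree in (1, 9):  # simple model vs. overfitting-prone model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree}: training error {train_err:.2f}, validation error {val_err:.2f}")
```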

ML uses statistical models (→ 9), e.g. linear regression primarily for supervised learning or cluster analyses for unsupervised learning. ML can use both parametric and non-parametric methods. Parametric models in ML use a fixed model structure, but the values of the parameters are optimized through training. An example is linear regression, when a model learns to predict real estate prices because it learns during training to recognize the statistical relationships between certain parameters and prices more reliably. These models therefore require certain assumptions to be made about the data, but do not rule out “learning” through training. In contrast, non-parametric models in ML do not have a fixed structure or a fixed number of parameters. Examples are decision trees that are improved in the course of training. They, too, are statistical models: they determine the best possible “split” at each node on the basis of statistical criteria.

A better criterion for distinguishing between ML and deterministic approaches is therefore the basic procedure: deductive methods use certain basic assumptions and draw conclusions from them, while inductive methods generate possible rules through a training process with increasing reliability. Rule generation is therefore a key factor in differentiating between deterministic and non-deterministic approaches. One example is decision trees that are not predefined but generated by training – in training, the model determines those rules that best separate or explain the training data, i.e. that have the greatest informative value for a target variable (e.g. creditworthiness). These rules can be interpreted and reused. Another example is association analyses, which represent relationships in large amounts of data and generate rules that describe frequent correlations. In shopping basket analysis, for example, a rule such as “if you buy diapers on Friday evening, you also buy beer” can be generated. These rules are also explicit and can be interpreted.
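As a sketch of such explicit rule generation (with invented toy data), a decision tree can be trained and its learned if/then rules printed, here with scikit-learn:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [income, existing debt] -> creditworthy (1) or not (0)
X = [[30, 20], [40, 35], [60, 10], [80, 5], [25, 30], [90, 15], [55, 50], [70, 8]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# The tree *generates* its splitting rules from the training data
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are explicit and human-readable
print(export_text(tree, feature_names=["income", "debt"]))
```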

Expert systems are systems that contain a knowledge base, e.g. application-specific if/then rules. The system applies rules to this knowledge base in order to derive further facts or conclusions (inference). The system can specify probabilities and may also work with imprecise information (“fuzzy logic”). One example of an expert system is the well-known “MYCIN”, a system developed at Stanford University in the 1970s to support the use of antibiotics. Based on parameters such as pathogen type, disease progression and laboratory data, the system was able to use certain rules to make or prepare decisions based on probabilities and uncertainties.

While decision trees generate explicit rules, neural networks are an example of implicit rule generation. Neural networks learn complex patterns from the data, but the “rules” they apply to make predictions are hidden in the weights and activations of the neurons. Although there are no explicit if/then rules in neural networks, the decisions are still determined by rules learned during the training process.

The difficulty with neural networks is that these rules are often hard to understand – they are “black box” models. Recently, however, there has been progress in explainable AI, which aims to reveal these implicit rules and make them easier to understand.

This does not yet say how ML proceeds. According to the methodology of learning, four forms can be distinguished:

  • Supervised learning, in which labeled data records are used.

  • Unsupervised learning, in which patterns are recognized without labeling, for example in data mining.

  • Semi-supervised learning as an intermediate form, in which both labeled and unlabeled data are used.

  • Reinforcement learning, in which learning is reinforced through interaction with the environment.

It is also helpful to differentiate between symbolic and subsymbolic learning. Symbolic learning is so called because it uses symbols and logical rules to represent knowledge. One example is decision trees: here, a structure of conditions or rules analogous to a flowchart is used or generated in order to draw rule-based conclusions from training data. The structure is tree-like because nodes represent decisions – each node corresponds to an if/then rule based on a property of the input data. The branches represent the results of applying these rules, and the leaves are the endpoints for the result, classification or prediction. Decision trees therefore work through defined decision processes. Several decision trees can also be trained on different data and then deliver better results together, for example by majority vote, than a single tree, which tends towards overfitting.
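A minimal sketch of this majority-vote idea (a “random forest” of trees; the toy data is again invented):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Invented toy data as above: [income, debt] -> creditworthy or not
X = [[30, 20], [40, 35], [60, 10], [80, 5], [25, 30], [90, 15], [55, 50], [70, 8]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# A single tree vs. an ensemble of 100 trees trained on random subsets of the data;
# the forest classifies by majority vote of its trees
single_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

candidate = [[50, 25]]
print("single tree:", single_tree.predict(candidate))
print("forest (majority vote):", forest.predict(candidate))
```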

However, symbolic learning can reach its limits with large amounts of data. Subsymbolic learning, on the other hand, uses raw data that does not need to be converted into system-compatible symbols. This approach is better suited to recognizing complex patterns in input data, but may be less transparent because the complex processes are more difficult to understand. Which form of ML is used does not depend on the area of application, but rather on whether rules are already known or are yet to be created. In the credit-rating example, a company does not have to work only with a decision tree; it can also try to determine correlations between defaults and other factors such as age, place of residence, gender, purchasing behavior, household size, etc. using a form of subsymbolic ML. The observed correlations can then be used as rules for a decision tree.

Subsymbolic learning includes, for example, artificial neural networks (and deep learning as a buzzword for particularly complex networks → 11).

11. What are neural networks?

Neural networks are algorithms that emulate information processing in the brain in order to recognize patterns in input data. A large number of connected “nodes” are used, which together form “layers” and process (“weight”) the input data step by step, possibly over several or very many layers. In contrast to decision trees (→ 10), neural networks are connected in a more complex way, because each node can be connected to several other nodes in the next layer.

In order for the network to be capable of meaningful processing, these weightings must be set correctly. Decision-making in neural networks accordingly takes place as distributed and continuous processing, from a receiving “input layer” via interposed “hidden layers” to the output level, the “output layer”, whereby the network learns by adjusting the weights between the nodes. Decision trees, on the other hand, work with explicit conditions (“if A > X go left, otherwise go right”). Each node makes a decision that leads to a specific branch, which is why the decision paths are always completely comprehensible. Deductive systems are therefore more of a “white box”, while inductive systems are a “black box”.

The weightings of the nodes in the network are improved through training by comparing the output of the network with an expected result. If there are deviations, the weights are adjusted using further training data – and so on.
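The following minimal sketch (plain numpy, invented data) shows this loop for a single “neuron”: compare the output with the expected result, then adjust the weights in the direction that reduces the deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: two input features, target is 1 if their sum exceeds 1
X = rng.uniform(0, 1, (100, 2))
y = (X.sum(axis=1) > 1).astype(float)

w = rng.normal(0, 0.1, 2)  # weights
b = 0.0                    # bias
lr = 0.5                   # learning rate

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(200):
    out = sigmoid(X @ w + b)         # forward pass: weighted input -> output
    error = out - y                  # deviation from the expected result
    w -= lr * X.T @ error / len(X)   # adjust weights against the error gradient
    b -= lr * error.mean()

print("trained weights:", w, "bias:", b)
print("accuracy:", ((sigmoid(X @ w + b) > 0.5) == y).mean())
```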

This training can be carried out in different ways:

  • In supervised learning, both input data and the desired outputs are made available to the network. The network thus learns to map the relationship between input data and output. An example is an input of animal photos where the network is simultaneously provided with a data set of appropriately labeled images of dogs and cats (“labels”). The network compares the predictions from the input data with the labels and adjusts the weightings until prediction errors are minimized. This is a common procedure for data classification, e.g. for image classification, spam filters (learning by marking e-mails as spam) or the prediction of real estate prices (= the labeled data) based on information about the size, location and features of the property.

  • In unsupervised learning, the network receives input data but no labels. It must therefore independently recognize patterns and structures in the data by grouping similar data points or reducing data to certain relevant characteristics. This approach is suitable for data exploration, e.g. for customer segmentation (grouping based on purchasing behavior without predefined categories), the recognition of unusual transactions without a definition of “unusual”, or the recognition of topic clusters in a large text collection.

  • Semi-supervised learning combines supervised and unsupervised learning – both (a few) labeled and (a lot of) unlabeled data are used for training. The labels make it easier to recognize patterns. When labeling data is too time-consuming, this approach can be useful, for example when a smaller number of labeled X-ray images are used with a larger amount of unclassified images to improve diagnostic accuracy, when classified product reviews are used with unclassified reviews to determine sentiment in new reviews (“sentiment analysis”), or in speech recognition when transcribed audio recordings are combined with further speech data to improve recognition accuracy.

  • In reinforcement learning, the network interacts with an environment and “learns” – adjusts weights – through rewards and punishments. It is an interactive trial-and-error approach that is used, for example, to train an agent in games such as chess or Go (learning through repeated play), in robot navigation (learning through navigation in an environment) or in energy management (learning by adapting power distribution based on consumption patterns); see the sketch after this list.
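As a hedged illustration of the last item, here is a minimal tabular Q-learning sketch on an invented one-dimensional “corridor”: the agent is rewarded for reaching the right end and learns, by trial and error, to prefer moving right.

```python
import random

random.seed(0)

N = 5                 # corridor positions 0..4, reward at position 4
q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}  # Q-values per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit the better action, sometimes explore
        a = random.choice((-1, 1)) if random.random() < epsilon \
            else max((-1, 1), key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N - 1)
        reward = 1.0 if s_next == N - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value
        best_next = max(q[(s_next, -1)], q[(s_next, 1)])
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# learned policy: 1 = move right
print({s: max((-1, 1), key=lambda act: q[(s, act)]) for s in range(N - 1)})
```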

Like other ML models, neural networks also form rules. However, these rules are not explicit, unlike in an association analysis, for example (→ 9). Nor are they meant to be: the aim is not to find rules, but to produce an output – the system applies rules without stating them (“black box”). A decision tree, for example, forms explicit rules, while neural networks form implicit rules; these rules are hidden in the activations and weightings of the “neurons”. The problem with neural networks is that the rules are often difficult to understand.

However, there are approaches to reveal implicit rules. For example, “saliency maps” visualize which components of the input contributed most to the decision (e.g. by highlighting the image area that was decisive for the classification), and “Local Interpretable Model-agnostic Explanations” (LIME) work in a similar way – they use simple models such as linear regressions in parallel with the use of the neural network and can provide comprehensible explanations (e.g. that words such as “free” are decisive for the classification of an email as spam).
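A minimal sketch of the perturbation idea behind such LIME-style explanations, without the actual LIME library and with an invented toy “spam classifier”: each word is removed in turn, and the drop in the model’s score indicates how much that word contributed.

```python
# Invented stand-in for a black-box classifier returning a spam score for a text;
# in practice this would be the neural network to be explained.
def spam_score(text: str) -> float:
    words = text.lower().split()
    return 0.4 * words.count("free") + 0.3 * words.count("winner") + 0.1 * words.count("click")

def explain(text: str) -> dict[str, float]:
    """Attribute the score to words by leaving out one word at a time."""
    words = text.split()
    base = spam_score(text)
    contributions = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])  # text without word i
        contributions[w] = base - spam_score(perturbed)  # score drop = contribution
    return contributions

print(explain("Free prize click now winner"))
```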

12. What is a Large Language Model (LLM)?

A Large Language Model (LLM) is based on a neural network (→ 11) and “understands” language. Well-known examples are the GPT models from OpenAI, Gemini from Google, LLaMA from Meta, Claude from Anthropic, Command from Cohere, Grok from X, the models from Mistral, Ernie from Baidu and Falcon from the Technology Innovation Institute in Abu Dhabi.

In the training of an LLM, a distinction can be made between prior preparation and the actual training.

As part of preprocessing, the training data (e.g. texts from books, websites, forums, Wikipedia, etc., now also obtained under corresponding licenses from major publishers such as the NY Times; on training → 36) is cleaned up. For example, irrelevant or incorrect content and spam are removed, as are, in some cases, superfluous symbols and stop words (such as “the”, “a”, etc.).

A “tokenizer” then breaks texts down into smaller units (the tokens) – depending on the approach, a token may be a word, a single character or a word component. The latter applies to OpenAI, for example, where a variant of “byte-pair encoding” is used: starting from individual characters, the most frequent character pairs are combined into new tokens, as a result of which the vocabulary grows successively and more frequent words or components are treated as a whole. Homonyms such as “bank” can be stored as several tokens depending on the context (“money in the bank”, “sitting on the bank”).
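A minimal sketch of the byte-pair encoding idea (simplified; real tokenizers such as OpenAI’s work on bytes and much larger corpora): count the most frequent adjacent pair of symbols and merge it into a new token, repeatedly.

```python
from collections import Counter

# Toy corpus: each word as a list of symbols (initially single characters)
corpus = [list("lower"), list("lowest"), list("newer"), list("wider")]

def most_frequent_pair(corpus):
    pairs = Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

for _ in range(3):  # perform three merge steps
    a, b = most_frequent_pair(corpus)
    merged = a + b
    for word in corpus:
        i = 0
        while i < len(word) - 1:
            if word[i] == a and word[i + 1] == b:
                word[i:i + 2] = [merged]  # merge the pair into one token
            i += 1
    print(f"merged {a!r}+{b!r}:", [" ".join(w) for w in corpus])
```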

However, the tokens have no significance on their own – they are only interesting in their relationship to other tokens. These relationships emerge from the input data during training and can be conceptually expressed as proximity or distance values. For example, the word “house” is closer to the word “roof” than to the word “damage”, the token “big” has a greater proximity to “kind”, etc. Corresponding values are therefore assigned to each token. These values are the vectors: in general, a vector is an ordered list of numbers with a certain dimensionality, arranged in a certain order. In the context of an LLM, a vector is the value of a token in relation to other tokens. The learned vectors are called “embeddings” – embeddings are therefore an expression of the structure or properties of data.

“Dimensionality” means the number of numerical values of the vector. These numbers express the properties of a token. A vector with a dimensionality of 768 therefore means a series of 768 numbers, each representing a specific learned feature. The higher the dimensionality, the finer the recorded differences in meaning. The GPT-3 model from OpenAI has a dimensionality of 768 to 12,288, depending on the variant. The value for GPT-4 is not known, but is presumably similar. Each token therefore receives up to 12,288 properties during training.
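A sketch of how such proximity between token vectors can be expressed numerically; the three-dimensional vectors here are invented for illustration, while real embeddings have hundreds or thousands of dimensions:

```python
import numpy as np

# Invented miniature "embeddings" with a dimensionality of 3
embeddings = {
    "house": np.array([0.9, 0.1, 0.0]),
    "roof": np.array([0.8, 0.2, 0.1]),
    "damage": np.array([0.1, 0.9, 0.3]),
}

def cosine_similarity(a, b):
    """1.0 = same direction (high proximity), around 0.0 = unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("house/roof:", cosine_similarity(embeddings["house"], embeddings["roof"]))
print("house/damage:", cosine_similarity(embeddings["house"], embeddings["damage"]))
```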

Trained models can then be further trained for specific application areas on a specific, smaller data set (“fine-tuning”), e.g. on medical data, technical documentation, legal texts or material from a specific company. The model is trained on this data in such a way that it refines the skills it has learned without unlearning them. The parameters of the model are slightly adapted – for example, the model learns technical terms, certain formulations or typical sentence structures. One example is the EDÖBot from datenrecht (https://edoebot.datenrecht.ch/), which is based on a model from OpenAI but has been further trained with data protection material.

Performance can also be improved through “Retrieval-Augmented Generation” (“RAG”). Here, an LLM is combined with external sources of information, i.e. information outside the model is included in the query, e.g. more up-to-date or more specific information that was not learned in the training. A search component (the “retriever”) searches an external database for relevant data at query time, and the generator uses this data to provide a better response. This is also used by the EDÖBot, which can, for example, access the dispatch on the current DPA or the FDPIC’s guidelines.
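A hedged sketch of the RAG pattern; the embedding function is a crude placeholder and the final LLM call is omitted, since a real system would use an embedding model and an LLM API:

```python
import numpy as np

# Placeholder embedding: a real system would use an embedding model instead
def embed(text: str) -> np.ndarray:
    vocab = ["dpa", "fdpic", "guideline", "fine", "consent"]
    t = text.lower()
    return np.array([float(t.count(w)) for w in vocab])

documents = [
    "The dispatch on the current DPA explains the duty to inform.",
    "The FDPIC guideline addresses consent requirements.",
    "Administrative fine provisions under the DPA.",
]
doc_vectors = [embed(d) for d in documents]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Retriever: return the k documents closest to the question."""
    q = embed(question)
    scores = [float(q @ v) for v in doc_vectors]
    best = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in best]

question = "What does the FDPIC guideline say about consent?"
context = "\n".join(retrieve(question))

# Generator step: the retrieved context is prepended to the prompt for the LLM
prompt = f"Answer based on this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would now be sent to the LLM
```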

Basic questions

13. What is an “AI system” (AIS)?

During the negotiations (→ 4), this central point – which determines the material applicability of the AIA – was one of the particularly contentious issues, and it cannot be said that the outcome was a success. The Commission’s draft of April 2021 (https://dtn.re/dzZqxl) still contained a different definition.

The AIA now defines an AIS as follows (Art. 3 No. 1 and Recital 12):

“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

So it’s about

  • a “machine-based system” (i.e. not, for example, a biological system – transplanting a brain would therefore not be placing an AIS on the market),

  • that is designed to operate with varying levels of autonomy,

  • that may exhibit adaptiveness after deployment, and

  • that “infers, from the input it receives, for explicit or implicit objectives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

As a result, two elements are decisive, but they ultimately merge into one:

  • The system is designed for autonomous operation. According to Recital 12, this means that it is not “based exclusively on rules defined by natural persons for the automatic execution of operations”, but that it “acts to a certain extent independently of human intervention and is capable of operating without human intervention”; and

  • it can derive an output from input, whereby this is not just any derivation, but involves “learning, reasoning and modeling processes” (Recital 12; “inference”).

However, this leaves open the question of what is meant by the necessary autonomy in operation.

The basis for an AIS is primarily machine learning (ML → 10) (Recital 12: “machine learning, whereby data is used to learn how certain goals can be achieved”). It would be obvious to use the aforementioned distinction between deductive and inductive models (→ 10) and to understand an AIS as ML which, in contrast to deterministic statistical models, does not proceed deductively – i.e. which does not apply predefined rules, or not only predefined rules, but defines rules or at least learns to weight predefined parameters. An AIS would therefore be, for example, a model that learns from training data how strongly the land area as a given parameter affects real estate prices, and no AIS would be a model that merely applies defined parameters and weightings to new data – e.g. a simple Excel sheet with a corresponding formula.

However, the distinction is not that clear. According to Recital 12, the AIA also covers “logic- and knowledge-based concepts derived from coded information or symbolic representations of the task to be solved” as AIS. This applies to the example mentioned above: the Excel sheet for calculating real estate prices is a logic- and knowledge-based concept that makes deductions from the coded task (the Excel formula calculates the real estate prices depending on the input data). Whether this concept, i.e. the Excel formula, is based on training is in itself irrelevant, because Excel does not learn during use. If only the distinction between deductive and inductive approaches were made, all these systems would fall outside the definition.

In any case, what matters cannot be the adaptability of the model during operation, i.e. after commissioning, for two reasons: First, the adaptiveness element is not mandatory under the wording of the provision, but illustrative. Second, trained models would otherwise not be covered by the AIA, and this applies to the vast majority of systems in use, including widely used LLMs, which is of course not the intention. However, a trained system is not really autonomous in operation – it processes the input data according to its parameters, which may have been learned in a training phase but, as mentioned, no longer change (until an update, and subject to the exceptional case of a system that continues to be trained in operation, as may be the case, for example, with anti-fraud systems). From this perspective, most systems are deterministic, not autonomous.

Nor can one look solely at the development phase. During development, a model can learn, and because the goal of the learning process is only described functionally (e.g. reliable classification of images, generation of a meaningful text), but not technically, the learning process is not determined on a technical level (how the parameters are to be set so that the learning goal is achieved is not specified – hence the training). However, the wording of Art. 3 No. 1 explicitly refers to “operation” and not training – training is addressed in the AI Act (→ 36), but not in the definition of the AI system, and, unlike testing, it is not mandatory. The required autonomy can therefore not be sought in training alone.

This still leaves open what is meant. The OECD published an accompanying memorandum on its parallel definition of the “AI system” in March 2024, which is somewhat clearer on the required autonomy:

AI system autonomy (contained in both the original and the revised definition of an AI system) means the degree to which a system can learn or act without human involvement following the delegation of autonomy and process automation by humans. Human supervision can occur at any stage of the AI system lifecycle, such as during AI system design, data collection and processing, development, verification, validation, deployment, or operation and monitoring. Some AI systems can generate outputs without these outputs being explicitly described in the AI system’s objective and without specific instructions from a human.

Autonomy in operation therefore does not refer to the function of the system as such, which, as noted, is usually determined, but to what it does with input data: a system is autonomous if it can work on the input without human intervention and generates an output that is not explicitly predetermined. The non-deterministic aspect is therefore to be found in data processing and refers to the relationship between input and output.

It can be argued that there is no real autonomy here either. If the system is trained, its data processing is determined by the parameters of the system. The same input must generate the same output unless a random function is built in. This is often the case, e.g. in the OpenAI models (this factor can be controlled to a certain extent with the temperature setting), but even a random generator is basically determined (non-deterministic generators provide different values under the same initial conditions, but because the software is deterministic in itself, an external factor such as radioactive decay must be included for randomization, and this factor obeys natural laws).

However, the AIA is a law with a purpose, and not an exercise in natural philosophy. Accordingly, it must be interpreted functionally, in particular with regard to the legal consequences that are meant to cover the circumstances for which it is designed. As an interim result, the necessary autonomy must therefore be located at the level of data processing from the input to the output, and it must be such that the result does not, on a normal human view, appear to be determined.

Ultimately, this is a form of the Turing test (→ 7): the AIA treats a system as an AI system if it looks like AI. The rule of thumb of the Austrian data protection authority also goes in this direction (→ 1):

Put simply, these are computer systems that can perform tasks that normally require human intelligence. This means that these systems can solve problems, learn, make decisions and interact with their environment in a similar way to humans.

An AIS is therefore a system which, when in operation, generates an output from a variety of different, a priori given options, without the selection being purely random and without following direct human guidance, and which therefore fulfills a task for which a person would have to think. This also explains the distinction from a determined system: a person who is told in detail how to proceed no longer has to think. Accordingly, it will not be possible to say with certainty for every system whether it falls under the AIA or not.

AIS are, for example:

  • Chatbots

  • Recommendation systems for streaming services

  • Voice assistants that learn through user interaction

  • autonomous vehicles that adapt their driving style based on sensor and environmental data

  • Facial recognition systems whose accuracy is improved through use

  • ML-based translation tools

  • Fraud detection systems at banks that learn to recognize suspicious patterns

  • Diagnostic systems in the healthcare sector

  • personalized learning platforms (even if they generate repetition intervals based on learning success)

  • Spam filters

A non-deterministic approach remains a prerequisite in each case; pure if/then logic, by contrast, is not sufficient – e.g. a music streaming service that suggests Megadeth to all Metallica listeners. Such logics are deterministic; a learning or inference component is missing (assuming the system has not come across this correlation itself).

No AIS are, for example:

  • Excel calculations, with the proviso that an Excel document could also be programmed into an AIS

  • Databases such as MySQL that provide information on request

  • Image processing software, insofar as it is deterministic, i.e. does not generate images and is not based on an LLM

  • Mail clients that move emails to folders according to fixed rules

  • the browser with which ChatGPT is used

  • Spam filters based solely on white/black lists

  • deterministic software generated by or with the help of an AIS (this is likely to affect a large proportion of software today where development is AI-supported, e.g. when using GitHub Copilot)

Various other application examples can be found in the Algorithm Watch atlas (https://dtn.re/ggJqKy).

An AI model is also not an AIS, i.e. a basic technology that has not yet been applied to any area of application (→ 39).

An AIS can be a product itself (e.g. an AIS to assess the suitability of job applicants), or it can be part of another product as an “embedded system” or “embedded AI” (e.g. a control system). In the case of control systems, the corresponding product does not therefore become an AIS as a whole, as follows from Art. 6 para. 1 and Art. 25 para. 3 – it remains subject to the corresponding product regulations, but the “embedded AIS” becomes an HRAIS through installation, provided the product falls under Annex I (→ 28). Only when the product manufacturer makes the AIS component available on the market or puts it into operation in its own name does it become the provider of the HRAIS (Art. 25 para. 3). In the conformity assessment, the control system will nevertheless have to be assessed in the context of the overall system. For other AIS, on the other hand, a division is only possible if the AI component can be clearly distinguished from other components (e.g. in a recruiting system that clearly separates an AI module for applicant ranking from the management of the applications under consideration).

The qualification as an AIS as such does not say anything about the associated risk – not least because the risks do not arise from the technology, but from the conditions of its use. The AI Act divides AIS use cases into four categories, even if it does not expressly say so (→ 16). The AIA also recognizes GPAI, the regulation of which was a pièce de résistance during the negotiations (→ 39 ff.).

14. Are all AI systems covered by the AIA?

No. First of all, the EU can only regulate within its mandate, i.e. only activities within the scope of Union law. This excludes activities of Member States that affect national security. Certain AI systems are then excluded from the AIA (Art. 2):

  • AIS that serve exclusively military purposes and national security (Art. 2 para. 3), whereby the AIA incorporates the limits of EU law;

  • AIS that are developed and used exclusively for research purposes (Art. 2 para. 6), so that the freedom of research is not impaired (AIS whose possible uses only include research are, however, covered by the AIA; Recital 23);

  • AIS that private individuals use for non-commercial purposes (Art. 2 para. 10; e.g. the private use of ChatGPT for planning a wedding reception);

  • FOSS (Art. 2 para. 12), i.e. free and open-source software (or models), provided that open distribution is permitted and users may use, modify and redistribute the model free of charge, and with the proviso that FOSS remains covered if it is an HRAIS (→ 28), if it or its use constitutes a prohibited practice (→ 27) or if it interacts directly with users or is used for the generation of content (Art. 50 → 37);

  • AIS during the research, testing and development phase before being placed on the market or put into service, except for tests under real conditions (Art. 2 para. 8). However, providers of AIS must of course also comply with the requirements during these phases, or rather prepare for compliance.

15. What is the general regulatory approach of the AIA?

Despite its name, the AIA is neither a comprehensive regulation of artificial intelligence nor market conduct law, but product safety law. It is based on the established principles of product regulation in the European single market, particularly in the “New Approach” regulations.

The “New Approach” (see the Commission Communication COM(2003)0240 of 2003, https://dtn.re/0mGegd) is a concept introduced by the EU in the 1980s to regulate the internal market: instead of issuing detailed technical regulations, the EU defines basic requirements for products as a prerequisite for market access. More detailed requirements are then developed by European standardization organizations (e.g. CEN, CENELEC or ETSI). These standards are not mandatory, but compliance with them establishes the presumption of conformity of the corresponding products (in the AIA: Art. 40).

Proof of conformity is then provided in the conformity assessment procedure, which the manufacturer either carries out himself (self-certification) or has carried out by an independent notified body. This assessment must be carried out before the product – i.e. the AIS – is placed on the market, i.e. before the risk of an AIS can manifest itself.

The CE mark indicates that the manufacturer has checked the conformity of the product, that the applicable conformity assessment procedure has been completed and that the requirements have been met. Further information can be found in the EU Commission’s “Blue Guide”, the guide on the implementation of EU product rules 2022 of June 29, 2022 (https://dtn.re/hrqXlb).

The AI Act takes up this approach, but with a few special features:

  • The AIA does not regulate a technology, but its use. However, it requires all HRAIS to comply with basic requirements in accordance with Art. 8–15. Specific use cases are addressed through selective bans (→ 27) and through the criteria for classification as HRAIS (→ 28).

  • The allocation of obligations is based on the different roles of the actors along the value chain (→ 20 ff.). The providers under the AIA correspond to the “manufacturers” under the New Approach, and the operators (deployers) to the “users”.

  • In principle, the provider (→ 20) of an HRAIS must carry out a conformity assessment procedure, unless an exception applies due to special public interests (Art. 16 lit. f and Art. 46). The conformity assessment procedure is specified in Art. 43. For HRAIS in the area of biometrics (Annex III No. 1), the provider can choose between self-certification (the internal procedure set out in Annex VI) and assessment by a notified body (Art. 29 ff. → 56; the external procedure set out in Annex VII).

For self-certification to be permissible, harmonized standards (Art. 40) or common specifications (Art. 41) – i.e. harmonized specifications of the essential requirements or their implementation – must be available for all aspects of the HRAIS. If these are missing, the provider must go through a notified body (Art. 43). For the other high-risk use cases according to Annex III, the self-certification procedure generally applies (Art. 43 para. 2), and for HRAIS that fall under product regulation according to Annex I Section A (e.g. medical devices), the procedure applicable there also applies to the conformity assessment under the AIA (Art. 43 para. 3).

  • For each HRAIS, the provider must issue an EU declaration of conformity and keep it for 10 years after the HRAIS has been placed on the market or put into service, for the attention of the authorities (Art. 16 lit. g and Art. 47). With the declaration of conformity, the provider declares that the HRAIS complies with the relevant requirements and that it assumes responsibility for this (Art. 47 para. 2 and 4). The declaration of conformity must contain the information specified in Annex V and be translated into a language that is “easily understandable” for the competent national authorities (Art. 47 para. 2).

  • The provider must affix the CE mark (Art. 16 lit. h and Art. 48). By doing so, it indicates that it assumes responsibility for conformity with the requirements of the AIA and any other applicable product requirements (Art. 30 of the Market Surveillance Regulation, https://dtn.re/h4EI0Y).

  • Placing on the market and putting into service are not permitted until the conformity assessment has been completed, and a new conformity assessment is required if the HRAIS is substantially modified (Art. 43 para. 4).

  • Insofar as a provider is subject to sector-specific product regulation, the requirements of the AIA must generally be covered within the correspondingly specified framework.

  • In addition, HRAIS must be registered in a public database (Art. 49).

HRAIS are not prohibited – in this respect, the AIA is quite innovation-friendly. Only a few areas of application or use cases that are assessed as particularly undesirable for society are prohibited (→ 27).

Conversely, however, parallel requirements, conditions and restrictions must be observed, e.g. those relating to data protection, fair trading, labor or intellectual property law. The AIA contains hardly any permissions in this regard, with one exception relating to data protection (→ 1).

16. How are risks categorized in the AIA?

The AIA distinguishes between different levels or classes of risk. The decisive factor here is primarily the specific use of an AIS, and not its technical characteristics as such, the data used for training or during use, or other criteria that could also be used for risk classification. This differentiation makes sense in principle; however, it is rather rough and cannot always do justice to the specific circumstances, analogous to the legal classification of certain personal data as particularly worthy of protection. The AIA recognizes four risk levels for AIS: unacceptable risk, high risk, limited risk or transparency risk, and everything else:

  • Prohibited AIS: AIS or use cases with unacceptable risks are generally prohibited as a “prohibited practice” (Art. 5 → 27).

  • HRAIS: AIS or use cases in sensitive areas such as critical infrastructure, education, employment, essential public services or law enforcement; they are subject to the requirements that make up the main part of the AIA. Art. 6 regulates the classification of an AIS as an HRAIS (→ 28).

  • AIS with transparency risks: These are AIS that are not HRAIS but are intended for direct interaction with natural persons, that generate content or that are intended for emotion recognition or biometric categorization (Art. 50 → 37). Limited requirements apply here, which are primarily aimed at transparency.

  • Other AIS: For all other AIS, the AIA only contains marginal specifications (→ 38).

The obligations of a risk class also apply to the higher classes.

Prima vista, the AIA defines a fifth risk category: AIS that “present a risk” according to Art. 79. These are AIS with special risks according to Art. 3 No. 19 of the Market Surveillance Regulation (https://dtn.re/JgakBQ), i.e. atypically increased risks to health, safety or fundamental rights. Such an AIS does not have to be an HRAIS, even if this will generally be the case. If a market surveillance authority (→ 43) has reason to believe that such risks exist, it examines the AIS in question and – if the assumption is confirmed – informs the competent national authorities. Operators also have special obligations in connection with such a system, but only if it is an HRAIS.

However, the requirements for such AIS do not increase materially; it is only a matter of a special check and, if necessary, the enforcement of compliance. Such AIS therefore do not form a separate risk category, and unless they are also an HRAIS, which is likely to be the case in most cases, there are hardly any requirements.

GPAIMs do not fall into these risk classes because they do not have a specific area of application that could be classified accordingly. Only when they become a GPAIS do they fall into a risk class as an AIS.

17. Which roles are defined in the AIA?

The AIA defines several roles that entail different duties and responsibilities in relation to AIS – and in some cases also to GPAI. It follows the standard of European product safety law with the distinction between provider, operator (deployer), importer and distributor, but also recognizes further roles:

  • Provider (AIS and GPAI): The entity (i.e. the natural or legal person) that places an AIS on the market and bears the main responsibility for compliance with the requirements (→ 20);

  • Operator (Deployer; AIS): The entity that deploys an AIS or a GPAI (→ 21);

  • Importer (AIS): The entity that first imports an AIS or a GPAI of a third-country provider into the EU (→ 23);

  • Distributor (AIS): The entity that offers an AIS on the Community market without itself being a provider or importer (→ 24);

  • Product manufacturer (AIS): The entity that manufactures a product in which an AIS is installed;

  • Authorized representative: According to Art. 3 No. 5, this is an entity in the EU that has been authorized in writing by the provider to fulfil the obligations set out in the Regulation or to carry out procedures on its behalf. Representatives have the monitoring and cooperation obligations under Art. 22;

  • Affected person: This role is not legally defined, but it concerns persons whose data is processed by an AIS. They have certain rights under the AIA (in addition to the rights under the GDPR).

If an entity has several roles at the same time, the requirements apply cumulatively in each case (Recital 83). Recital 83 gives the example of a distributor who is also an importer, but this is excluded by the legal definitions (a distributor provides an AIS “with the exception of the provider or the importer”; Art. 3 No. 7). More plausible is the provider that puts its own AIS into operation and is then also the operator.

In addition, the AIA defines the “actor” (in the official English version: “operator”); this is a generic term for providers, product manufacturers, operators, authorized representatives, importers and distributors (Art. 3 No. 8). It is not often used in the AIA, usually only for ease of reference and without attaching legal consequences of its own.

18 What is the territorial scope of the AIA?

The AIA is initially applicable in the EU. However, it is of EEA relevance and will then also apply to Norway, Iceland and Liechtenstein. The AIA is currently being examined in the EEA (https://dtn.re/LxZNyE); it will only be formally adopted into EEA law following a decision of the EEA Joint Committee.

Like the GDPR, the AI Act aims to establish a certain basic level of protection and a level playing field within the EEA (Recital 22). It must therefore also cover certain cases with a cross-border component. In doing so, the AIA distinguishes between the individual roles in the value chain, which is why the roles were addressed first (→ 17).

According to Art. 2 and 3 (both provisions together are decisive for the scope of application), the AIA applies as follows from a geographical and personal perspective:

  • for providers:

  • regardless of the location of the provider, when an AIS or a GPAIM is placed on the market or put into service in the EU (Art. 2 para. 1 lit. a); and

  • if the output of the system is used in the EU (lit. c → 19);

  • for deployers:

  • if the deployer is established in the EU or is located in the EU (lit. b). "Establishment" is likely to be interpreted broadly, in line with the GDPR;

  • if the output of the system is used in the EU (again lit. c);

  • for importers: if the importer is established in the EU and imports an AIS (Art. 3 No. 6);

  • for distributors: if the AIS is made available on the EU market, regardless of the location of the distributor (Art. 3 No. 7);

  • for product manufacturers: if they place an AIS on the market or put it into service in the EU together with their product and under their own name (Art. 2 para. 1 lit. e);

  • for (EU) authorized representatives of foreign providers (Art. 2 para. 1 lit. f);

  • for affected persons in the EU (Art. 2 para. 1 lit. g).

A Swiss company can therefore fall under the AIA in particular if it:

  • sells an AIS in or into the EU (as developer, importer or distributor),

  • sells another product in the EU that uses an AIS as a component, or

  • produces output that is used in the EU (→ 19).

19 What does "output is used in the EU" mean?

The use of output is addressed in Art. 2 para. 1 lit. c:

(c) providers and deployers of AI systems established or located in a third country, where the output produced by the AI system is used in the Union;

This certainly includes AI-generated text or an image. However, the AIA does not contain its own definition of output, in contrast to input (Art. 3 No. 33). The term is used frequently, but in each case without a more detailed description (e.g. in Recital 12 in the definition of the AIS → 13).

In some places, however, output is used in a way that suggests a broad interpretation, assuming the term is used uniformly (for example in Annex III No. 8 lit. b, HRAIS when used to influence an election or vote: an AIS is not covered as an HRAIS here if its output does not directly affect natural persons, such as a tool for campaign organization; output here cannot mean only the result of generative AI). For this reason, and because of the protective purpose of the AIA, it makes sense to include AI-generated control signals under the concept of output as well.

The more important question is therefore when output is used in the EU. Not every spillover can be meant. Rather, a certain tangible impact in the EU is required, which, analogous to market conduct rules, can probably only be concretized by means of a targeting criterion. This is supported in particular by Recital 22, which aims to prevent circumvention but does not want to cover just any effect in the EU, speaks of "intention" and gives the example of a constellation in which there is clearly more than a mere spillover:

In order to prevent circumvention of this Regulation [...], this Regulation should also apply to providers and deployers of AI systems established in a third country to the extent that the output generated by that system is intended to be used in the Union.

and:

This is the case, for example, where an actor established in the Union contracts certain services to an actor established in a third country in the context of an activity to be carried out by an AI system [...]. In such circumstances, the AI system operated by the actor in a third country [...] could provide the contracting actor in the Union with the output of that AI system resulting from that processing [...].

A provider therefore cannot fall under the AIA solely because of the use of its output, as long as the output is not intended for use in the EU, i.e. is not used there as intended. The Blue Guide (→ 15) can provide a certain degree of concretization here, although it remains vague.

The area of application is broad enough as it is. If an employee of a Swiss company sends an email to a French colleague and uses AI-generated text in it, or if a presentation with an AI-generated image or an AI-generated transcript is sent to a recipient in the EU, this should be sufficient, unless a de minimis threshold is derived from the criterion of tangible impact and applied in addition to the targeting requirement.

However, the question must remain open for the time being – it can be assumed that the EAIB (→ 53) will propose more specific guidance here. For non-EU actors who are only deployers of an HRAIS, and for actors who only deal with non-high-risk AIS, this question is in any case not quite as important as for HRAIS providers.

Due to the legal definition of the provider, the question can also be raised as to whether the use of the output alone can be sufficient at all, or whether a placing on the market or putting into service in the EU is additionally required. However, several arguments speak against this narrow interpretation:

  • Recital 22 extends the scope of application to AIS "even if they are neither placed on the market nor put into service or used in the Union".

  • With regard to providers, Art. 2 would not need to mention the use of output under this interpretation, because placing on the market in the EU would be sufficient on its own (Art. 2 para. 1). In the case of deployers, on the other hand, the reference to output would still be justified even if the provider were required to place the system on the market or put it into service.

  • The narrow interpretation would lead to a situation in which a deployer can be subject to the AIA, but not the provider of the corresponding system. Since the obligations of the deployer presuppose, at least in part, that the provider has also fulfilled its obligations (e.g. the retention of log data, which is not possible if the provider has not ensured the logging capability of the HRAIS), a parallelism is more plausible.

  • The legal definition of the provider allows the conclusion that placing on the market or putting into service is only a prerequisite for provider status if an entity does not develop an AIS itself, but has it developed. In the case of self-developed AIS, the development of the AIS is sufficient according to this interpretation (→ 20).

  • For reasons of protection, authorities and courts will probably follow a broad interpretation, i.e. allow the output to suffice. In any case, experience with the fundamental rights-oriented interpretation of the GDPR supports this.

Until the issue has been clarified, it should therefore be assumed that the intended use of the output in the EU is sufficient.

However, one may ask whether a use as output is required. This should be the case: anyone who wants AI-generated texts to be used in the EU can still fall under the AIA if a screenshot of the corresponding text is used. However, anyone who generates texts merely to illustrate how an LLM works, and uses them as examples rather than for their actual content, is hardly using output in the EU.

Roles

20 What is a provider?

Providers have the role that "manufacturers" have in product safety law. They are the entities that develop an AIS or a GPAIM (or have it developed under their control) and place it on the market or put it into service (Art. 3 No. 3):

[...] an [...] entity that develops or has developed an AI system or a general-purpose AI model and places it on the market under its own name or trademark, or puts the AI system into service under its own name or trademark, whether in return for payment or free of charge;

Providers bear the main responsibility for the conformity of the AIS, e.g. through the conformity assessment procedure, risk management, ensuring data quality during training and post-market surveillance (→ 0).

However, the wording of Art. 3 No. 3 allows two interpretations:

  • The condition that an AIS is placed on the market or put into service can apply generally,

  • or only to the second case, in which an AIS is not developed in-house ("has it developed").

At first glance, the first interpretation is more obvious. However, it is by no means unambiguous. For territorial application, the use of the output in the EU is sufficient (→ 19). It would be contradictory if most of the obligations were to lapse because the entity concerned does not also place the (HR)AIS used on the market or put it into service in the EU. In other words, the broad interpretation of the concept of provider resolves the internal contradiction in Art. 2, as the use of the output in the EU is then clearly sufficient. This speaks in favor of the broader interpretation of the concept of provider, as is also advocated in the literature.

"Placing on the market" (AIS or GPAIM) is defined in Art. 3 No. 9 as the process by which a specific AIS or a specific GPAIM is made available on the Union market for the first time:

  • This can be done once or permanently, but only once for each individual AIS or GPAIM. Anyone who makes an AIS available to a customer in the EU therefore does not become a provider if the AIS has already been placed on the market in the EU.

  • Placing on the market implies an offer or an agreement to transfer ownership, possession or other rights to the AIS or GPAIM, whether in return for payment or free of charge. In the case of an AIS, this is the case, for example, when an AIS is made available for use on-premise or as a SaaS offering, e.g. via an interface (API; see Recital 97 and Art. 6 of the Market Surveillance Regulation on distance selling). Placing on the market is carried out by the provider or – in the case of an AIS – by an importer (see below). If they pass on an AIS to a distributor for further distribution, they are already placing the AIS on the market (the subsequent act of the distributor is then a "making available").

  • On the other hand, placing on the market would not include the import by a person for their own use (e.g. of a cell phone with AI applications), the handover of an AIS for purely test purposes or the demonstration of an AIS at a trade fair (see the Blue Guide, section 2.3).

"Putting into service" (AIS) is then defined in Art. 3 No. 11 as the process by which an AIS is supplied to the deployer for its first use, but it also covers the provider's own first use:

  • Anyone who develops and uses an AIS is a provider within the meaning of the AIA, with the corresponding obligations.

  • Deployers, importers, distributors or other entities can also subsequently become providers (→ 22).

Because a product that is optimized by installing an AIS ("embedded AIS") does not itself become an AIS, the manufacturer of the corresponding product does not become a provider within the meaning of the AIA if the embedded AIS is used under the name or brand of another entity.

In the case of a combination of AIS, each individual provider should continue to be regarded as a provider, provided that the components continue to be used as intended. However, because the AIA refers to "systems" and not to software packages, components can probably be considered together as one AIS if they form a functional unit.

Manufacturers of a regulated product that is subject to product regulation in accordance with Annex I because an AIS has been installed as a safety component (within the meaning of Art. 3 No. 14), and who place the product on the market or put it into service under their own name, are then also deemed to be providers (Art. 25 para. 3).

21 What is a deployer?

Deployers do not design the system themselves; they merely use it (Art. 3 No. 4) – in the terms of general product safety law, they are therefore "end users".

However, the AIS must be used "under the authority" of the deployer, i.e. on its own responsibility (Art. 3 No. 4). This presupposes that the system is not operated solely on behalf of another deployer. It is unclear whether this also requires the deployer to configure, control, parameterize, etc. the AIS itself, or whether it is sufficient for it to decide on the use. If one starts from the deployer's obligations and asks when these obligations can meaningfully apply, a lower threshold is sufficient; control beyond mere use would not be a prerequisite here. According to this obvious view, "under its authority" means that the use is not carried out solely in the sense of commissioned processing or by an employee, but by an entity that uses an AIS for its own purposes. Conversely, someone who uses an AIS for someone else is not a deployer (but usually a provider).

The deployer must comply with the operating instructions (→ 35). This is essential because the operating instructions determine, among other things, the intended use of the AIS, i.e. the "intended purpose" (Art. 3 No. 12) for which the AIS is intended, as well as the framework for correct use. If the deployer leaves this framework, it can become a provider (→ 22).

A GPAIM has no deployer, because a GPAIM as such cannot be deployed (→ 39).

22 When does the deployer become a provider?

This question is less easy to answer than it initially seems. Art. 25 AIA contains the basic rule that a deployer becomes a provider under certain circumstances (a so-called "deemed provider"):

  • when it acts as a provider by affixing its name or trademark to an HRAIS after it has been placed on the market or put into service by the original provider,

  • when it substantially modifies the HRAIS (as defined in Art. 3 No. 23 AIA), without this turning the HRAIS into a low-risk AIS, and

  • when it uses an AIS outside of its intended purpose in such a way that this use first turns it into an HRAIS.

In each case, only the deemed provider is then considered the provider; the original provider is released from its responsibility in this respect. However, the original provider must cooperate with the new provider (Art. 25 para. 2), and it can price this accordingly. The obligation to cooperate does not apply, though, if the original provider has specified that the AIS may not be converted into an HRAIS – which also speaks in favor of corresponding contract drafting.

In contrast, the mere use of an HRAIS outside the intended use is not sufficient for classification as a provider. On the contrary, the provider must expect such use to a certain extent, as Art. 9 para. 2 lit. b shows in addition to Art. 25: the provider's RMS must also take into account the risks of foreseeable misuse. Only when misuse leads to a substantial modification or turns an AIS into an HRAIS does the deployer become a "deemed provider" in accordance with Art. 25. Anyone who uses a chatbot intended for customer support to select job applicants therefore becomes an HRAIS provider – but not when using it for employee satisfaction surveys (no HRAIS).

Fine-tuning (→ 12) alone should also not be sufficient to become the provider of the correspondingly further-trained AIS, unless the deployer offers the AIS under its own name or uses it in such a way that it becomes a new HRAIS. It remains to be seen whether the qualification as provider in the case of fine-tuning is based on Art. 25 or simply on the undefined element of "developing" according to Art. 3 No. 3. In the latter case, the deployer would more readily be classified as a provider in the case of fine-tuning. However, the AIA generally uses the term "develop" in a broader sense (e.g. in Art. 2 para. 6: no application of the AIA to an AIS that was "developed" [and put into service] solely for research purposes). In addition, Recital 93 separates the area of development from the role of the deployer. Above all, however, it is likely to be significant that the user can hardly fulfill the provider's obligations in the event of fine-tuning, because its control over the AIS does not go far enough. The deployer of a GPAIS also does not become a provider simply by using RAG (→ 12).

For GPAIMs, it likewise applies that the model becomes a GPAIS as soon as it is made available as a product, even if only by adding a user interface (→ 39). The above requirements then apply. Anyone who purchases a GPAIM and then puts it into service for a specific use case is the provider of the resulting AIS.

23 What is an importer?

The importer is an entity in the EU that imports a foreign HRAIS (i.e. an HRAIS offered under a foreign name or trademark) into the EU (Art. 3 No. 6).

The importer does not have to establish conformity itself, but its obligations tie in with those of the provider – in other words, the importer is not merely a reseller, but must

  • check that the conformity assessment has been carried out, that the technical documentation in accordance with Art. 11 and Annex IV AIA is available, that the HRAIS bears the CE mark and that the provider has appointed an authorized representative (Art. 23 para. 1), and

  • retain the documentation for the attention of the supervisory authorities (para. 5).

  • If there is any doubt about compliance with the essential requirements, the HRAIS must not be placed on the market, and

  • in the case of higher risks (as defined in Art. 79 para. 1), the provider, the authorized representative and the competent market surveillance authorities must be informed accordingly (Art. 23 para. 2).

24 What is a distributor?

According to Art. 3 No. 7, this is an entity that obtains an HRAIS from a provider, an importer or another distributor and makes it available on the Union market without itself being a provider or importer, i.e. after it has been placed on the market. "Making available" means any supply of an AIS or a GPAI for further distribution or use on the Union market, whether in return for payment or free of charge (Art. 3 No. 10).

Similar to the importer, the distributor must

  • check that the HRAIS bears the CE mark, that a declaration of conformity and the operating instructions are available, and that the provider or importer has indicated its name or brand and has a QMS (→ 35).

  • If there are reasonable doubts about compliance with the essential requirements, the HRAIS must again not be made available, and the distributor must contact the provider or importer.

  • If defects cannot be remedied, the HRAIS must be withdrawn from the market or recalled (by the distributor, provider or importer; Art. 24 para. 4).

  • In the event of higher risks (in accordance with Art. 79 para. 1), the provider or importer and the competent authorities must be informed (Art. 24 para. 4).

25 What is a product manufacturer?

This role is not legally defined either. It is an entity that manufactures a product into which an AIS is integrated. Under certain circumstances, this entity becomes a provider, namely when the AIS is a safety component of its product, the product falls under a product regulation according to Annex I, and the product manufacturer makes the AIS available on the market as part of its own product under its own name, or the product is put into service under the product manufacturer's name after being made available on the market (Art. 25 para. 3). In this case, the product manufacturer must ensure that the installed AIS complies with the requirements (Recital 87).

26 When must an authorized representative be appointed in the EU?

According to Art. 22, the provider of an HRAIS must appoint an authorized representative if the provider is established outside the EU. According to Art. 3 No. 5, an "authorized representative" is an entity resident or established in the EU that the provider of an AIS or a GPAIM has authorized in writing (i.e. probably in text form) and that has agreed to fulfil the obligations under the AIA or to carry out procedures on the provider's behalf.

The tasks of the authorized representative are to be defined in the contract, but include at least the catalog according to Art. 22 para. 3, e.g. checking whether the declaration of conformity and the technical documentation have been drawn up and the conformity assessment procedure has been carried out, the provision of certain information and documents to the authorities, and obligations to cooperate in the registration of the HRAIS. Art. 54 contains an analogous provision for providers of a GPAIM (→ 39).

Authorized representatives can resign their mandate and may even be required to do so.

Deployers and parties other than the provider are not obliged to appoint an authorized representative.

Prohibited and high-risk applications (HRAIS)

27 Which applications are prohibited?

AIS or use cases with unacceptable risks are prohibited as a "prohibited practice", i.e. the placing on the market, putting into service or use of an AIS for a corresponding purpose is prohibited (Art. 5):

  • Subliminal influence (Art. 5 para. 1 lit. a): Manipulation that unconsciously influences behavior, thereby distorting a decision and causing harm. This includes, for example, forms of deception, e.g. through "dark patterns" or "nudging", particularly through an approach that is so low-threshold that it is not consciously perceived, e.g. in a virtual environment (Recital 29). Intent to deceive is not a fundamental prerequisite, as deliberate deception is only one variant of the offense.

  • Exploiting vulnerability on the basis of age, disability, etc. (Art. 5 para. 1 lit. b). This also concerns the harmful distortion of decisions (Recital 29). Proportionate affirmative action is not covered;

  • Social scoring (Art. 5 para. 1 lit. c): Assessment of persons according to social behavior or personal characteristics over longer periods of time if persons are treated unfairly as a result, i.e. if the use of the AIS would have an unexpected or disproportionate consequence for the persons concerned. This does not include credit rating, which is not prohibited but high-risk (→ 32);

  • Risk assessment for criminal offenses (predictive policing) through profiling (Art. 5 para. 1 lit. d; with exceptions);

  • Facial recognition: The creation of facial recognition databases through broad scraping of images from the internet or surveillance footage (Art. 5 para. 1 lit. e). The comparison of an image with images on the internet, for example, would not be covered because this does not involve scraping;

  • Emotion recognition in the workplace or in educational institutions (Art. 5 para. 1 lit. f; with exceptions for health or safety-related concerns). Emotion recognition in other areas is not prohibited. Prohibited would be, for example, the transcription of calls with an evaluation of whether a customer advisor is sufficiently friendly or whether an employee expresses negative emotions towards the company. Because the AIA does not use the defined term "emotion recognition system" in this prohibition, the recognition of "intentions" (Art. 3 No. 39) is not covered; it must be about emotions, but the basis of the recognition can be not only biometric but also other data;

  • Categorization according to biometric data to infer race, political opinions, religious beliefs, sexual orientation, etc. (Art. 5 para. 1 lit. g; with exceptions). The term "biometric data" is defined in Art. 3 No. 34; it must relate to personal data. However, AIS are exempt from the ban if the categorization is only an ancillary function of another commercial service that is necessary for objective technical reasons (Art. 3 No. 40); for example, if an online service uses body characteristics for clothing purchases (insofar as this involves biometric data);

  • Real-time biometric remote identification in publicly accessible areas (Art. 5 para. 1 lit. h and para. 2-7; with exceptions). Authentication is not covered (→ 29).

The Commission has also issued guidelines on prohibited practices (→ 51).

These prohibitions may overlap with other prohibitions, e.g. prohibitions on deception under fair trading law or data protection restrictions. The fact that an AIS is not prohibited does not mean that it is generally permitted. Restrictions may arise, for example, from data protection and fair trading law.

28 What is a high-risk AI system?

HRAIS are AIS or use cases in sensitive areas such as critical infrastructure, education, employment, essential public services or law enforcement; they are subject to the requirements that make up the main part of the AIA (→ 15). Art. 6 governs the classification of an AIS as an HRAIS.

A distinction must be made between two cases:

The first case, under Art. 6 para. 1, concerns AIS covered by a product regulation according to Annex I, because the AIS or its use case is itself subject to such regulation or because it is installed as a safety component (within the meaning of Art. 3 No. 14) in such a product. The focus here is on the product risk, in particular risks to life and limb. Annex I distinguishes between two categories:

  • The first category, in Section A, concerns product regulations that follow the New Approach. The AIA is directly applicable here. This applies, for example, to machinery, toys, explosives and medical devices.

  • The second, in Section B, concerns product regulations outside the New Approach. The AIA is not directly applicable here. Instead, the corresponding legal acts are adapted in Art. 102 ff. so that the requirements of Chapter III Section 2 (Art. 8 ff., basic requirements for HRAIS) are taken into account in the sectoral act. This concerns means of transport (aviation, rail, motor vehicles, etc.).

The prerequisite in each case is that the product or the AIS as a product must undergo a third-party conformity assessment (Art. 6 para. 1 lit. b). Whether this can also cover cases in which an internal conformity assessment procedure is used is disputed.

The second case, under Art. 6 para. 2, concerns AIS that are mentioned in Annex III. Annex III concerns specific areas of use; the point of reference here is therefore less a product risk than a risk of use. The following cases are listed exhaustively, each relating to the intended use of the HRAIS (see → 29 ff.):

  • Biometrics: Use of AIS for remote biometric identification, biometric categorization or emotion recognition (see → 27);

  • Critical infrastructure: AIS that serve as safety components in certain critical infrastructures (→ 31);

  • Education and vocational training: AIS for managing access to educational opportunities, assessing learning outcomes or monitoring examinations (→ 30);

  • Employment, personnel management and access to self-employment: AIS in the area of recruiting, for relevant decisions, or for the monitoring and evaluation of performance or behavior (→ 30);

  • Essential services and benefits: AIS for assessing entitlement to public support (e.g. social insurance), creditworthiness assessment, risk and premium determination in life and health insurance, or the triage of emergency calls, emergency operations and first aid (→ 32);

  • AIS to support law enforcement authorities, in the area of migration, asylum and border control, and in the judiciary and democratic opinion-forming (→ 33).

The intended use of the AIS is decisive here, whereby the intended purpose is either set by the manufacturer (Art. 3 No. 12) or by a deployer who uses an AIS outside the intended purpose (Art. 25 → 22).

29 Which cases are high-risk in the field of biometrics?

Annex III No. 1 governs use cases in the field of biometrics. Three cases are covered:

  • The first case is biometric remote identification. This is legally defined in Art. 3 No. 41. It refers to AIS that are intended to identify persons without their involvement and generally from a distance. This does not include authentication systems for premises and devices such as iris, face, vein and fingerprint scanners (see also Recital 54). However, a camera mounted above a highway would be covered if an AIS compares the images with a database.

  • The second case concerns the biometric categorization of people if an AIS is intended to infer "sensitive or protected attributes" (e.g. people are categorized into ethnic groups using AI). This does not include (→ 27) cases in which the categorization is only a secondary function of another commercial service that is necessary for objective technical reasons (Art. 3 No. 40).

  • The third case is AIS for emotion recognition. According to Art. 3 No. 39, these are AIS that are intended to detect or predict "emotions or intentions", but do so on the basis of biometric data. This applies, for example, to an AIS that infers emotions from the voice – coloring, trembling, etc. In a broad interpretation, a conclusion about health is also likely to be covered. However, the basis must be biometric data. If emotions (or intentions) are assessed on the basis of emails or other texts, this does not turn the AIS into an HRAIS. The result may be different in the workplace, however, where an AIS becomes an HRAIS if it is used to influence decisions on working conditions, promotion, dismissal, etc., or to monitor performance or behavior (→ 30). Of course, this also applies if the input data is biometric data.

30 Which cases are high-risk in the employment and education sector?

As mentioned, Annex III lists use cases that are considered high-risk (→ 28). Annex III No. 3 concerns vocational and non-vocational (further) education:

  • A first use case (lit. a) are AIS that are to be used to determine access or admission to educational opportunities. "Determine" is to be understood as "decide", as can be seen from Recital 56 – an AIS whose intended use is a gatekeeper function for educational offers, e.g. in an admission or aptitude test, is therefore high-risk. This applies not only to decisions on access as such, but also to the selection among different educational opportunities. "Determining" is more than mere participation. An AIS that makes recommendations for admission would therefore probably already be covered.

  • A second case is an AIS that is intended for the assessment of "learning outcomes". This is therefore particularly about the assessment of examinations. However, the wording of the law goes somewhat further; the assessment of learning outcomes seems to be sufficient in itself. This would actually also cover, for example, the correction function in a language-learning program if it uses an AIS, even if nothing more than passing a level is at stake.

  • The third case overlaps with the first: it concerns AIS used to assess the level of education someone is to receive or to which they are to be admitted. Aptitude tests are likely to be the main focus here. However, it must be about education – talent management with an AI-supported assessment of suitability for another position would not be covered here (but would fall under a different use case, see below).

  • The fourth case concerns AIS that are intended to be used for the monitoring of examinations in education and vocational training.

It remains to be seen how broadly the concept of education is to be understood. According to Recital 56, it includes "educational and vocational training institutions or programs at all levels", i.e. school education, but probably also early and continuing education. However, internal training courses that do not serve the purpose of further education, such as compliance training, are hardly covered. An AI-supported evaluation of test questions during such training should therefore not be sufficient. It is a borderline case, however, and this is where the workplace-related use cases (see below) often come into play (in particular the behavioral and performance evaluation of an employee).

Annex III No. 4 specifically concerns the workplace. A distinction must be made between the recruitment process and the employment relationship:

  • HRAIS are AIS that are to be used for the recruitment or selection of applicants. This is a broad description because neither the type nor the effectiveness of the use is restricted. This use case is also meant broadly, as the text makes clear: it is sufficient for an AIS to "screen" or "filter" applications.

As with all use cases in Annex III, however, this use must correspond to the intended use. Formulating a job advertisement with ChatGPT is therefore not sufficient. On the other hand, anyone who builds an AIS that categorizes applications on the basis of an OpenAI model is operating an HRAIS. It should also be sufficient for an AIS to check how well applications match a job advertisement – a form of semantic search that corresponds to a "screening" of applications.

  • Decisions on working conditions, promotion and dismissal. Here it is prima vista unclear whether the AIS must make these decisions or merely influence them. The legal text implies the latter: it is a matter of using the AIS to make decisions which then – as a consequence – affect working conditions, etc. However, the AIS does not have to make the decision itself; it is sufficient if it is intended to support a human decision on such points (the English text is clearer: "intended to be used to make decisions", not "intended to make decisions"). The purpose of the law according to Recital 57 also speaks in favor of this interpretation (protection of career prospects and livelihoods from "noticeable influence").

  • Two further use cases apply in addition. One is the assignment of tasks on the basis of behavior or personal characteristics or attributes, the other the observation and evaluation of performance and behavior. An AIS thus already becomes an HRAIS when behavior is evaluated, even if no decision on career advancement is subsequently made, prepared or influenced (even if AI-supported performance or behavioral assessments are generally designed for such decisions).

An HRAIS would therefore be, for example, an AI-supported evaluation of the performance of an employee in a call center. In contrast, an AI-supported optimization of field service routes would not be an HRAIS. Although the behavior of the relevant employees is processed in this case, the evaluation does not relate to this behavior but abstracts from it. Since career advancement is not at risk in such a case, it should not constitute an HRAIS. However, if an AI-supported evaluation is subsequently carried out to determine whether a driver is following the optimal route, this would be an HRAIS. Human awareness of the result should not be a prerequisite. A driving assistant that makes suggestions depending on the route actually taken would therefore probably be an HRAIS. The same applies analogously to an AIS that is used in production to optimize processes.

This does not yet say what all belongs to the employment area. Self-employed work may be covered, especially as Recital 57 also mentions "access to self-employment". All of the use cases mentioned can also apply if the selection, decision, observation or evaluation concerns not a dependent employee but a self-employed person. However, an employee-like status, i.e. a certain degree of dependency and subordination, must be required; otherwise there is no corresponding need for protection.

31 Which cases are high-risk for critical infrastructures?

The AIA provides for only one case here: an AIS is used (as intended) as a safety component in the control or operation of critical digital infrastructure within the meaning of point 8 of the Annex to Directive 2022/2557 on the resilience of critical entities (CER Directive, https://dtn.re/D2CV56), or in the area of road traffic or the supply of water, gas, heat or electricity.

32 Which other cases in the private sector are high-risk?

Annex III No. 5 governs further cases that are relevant in the private sector.

  • The first concerns AIS for the "creditworthiness check and credit rating" of natural persons (but not legal entities). This is relatively broad because the AIA does not define what is covered by these terms. In any case, it concerns not only credit agencies and comparable providers of creditworthiness information, but also companies that carry out corresponding assessments for themselves or for group companies (provided they are AI-supported).

  • However, this does not apply to AIS that are used for the "detection of financial fraud". The text speaks here of "are used" and not of "are intended to be used". This could lead to the conclusion that an AIS is not (or no longer) an HRAIS even if its primary purpose is to assess creditworthiness but it is only used to detect fraud. However, this contradicts Recital 58, which is narrower in this respect: it only excludes AIS that are "intended" for fraud prevention. At the same time, however, it is also broader: AIS that are intended under EU law to detect financial fraud or to calculate capital requirements are not HRAIS. This could be a problem for a Swiss financial services provider that uses an AIS to calculate Swiss capital requirements (i.e. not on the basis of EU law) and makes the result available to its EU parent company, thereby falling territorially under the AI Act (→ 18).

  • An AIS also becomes an HRAIS if it is used in the insurance sector for risk assessment or premium determination, but only in the area of life or health insurance.

  • Another HRAIS is an AIS that is used for the triage of emergency calls, the dispatch of paramedics, police or firefighters, or the prioritization of first aid.

  • Finally, according to Annex III No. 8, AIS are also covered as HRAIS if they are used for the determination of facts and the application of the law by arbitration tribunals and mediators (in addition to the state courts → 33). This would include, for example, AIS that establish the facts from files, but always subject to merely subordinate support within the meaning of Art. 6 para. 3 (→ 34), e.g. "the anonymization or pseudonymization of court judgments, documents or data, communication between staff or administrative tasks" (Recital 61). AIS can also be HRAIS in the private sector in connection with influencing elections and votes (→ 33).

33 Which cases are high-risk in the public sector?

Annex III contains some use cases that are only relevant in the public sector (but they include companies acting on behalf of a public authority).

Annex III No. 5 concerns AIS for use by or on behalf of public authorities to assess whether an entitlement to "basic public support and services" is to be restricted or revoked. This applies, for example, to social insurance or social assistance. However, these cases are limited to application to natural persons.

Annex III No. 6 concerns various use cases in the area of law enforcement, and No. 7 in the area of migration, asylum and border control. Point 8 lit. a then concerns AIS for use by or on behalf of a judicial authority (including private dispute resolution → 32) to assist in the investigation of facts and the application of the law. According to lit. b, AIS are also high-risk if they are used to influence the outcome of an election or vote or voting behavior. However, this must be a direct influence – AIS for the administrative support of campaigns are not covered.

34 Are there cases in which an HRAIS is exceptionally not considered high-risk?

Yes. In contrast to the product-related high-risk cases, for the use-related classifications according to Annex III it is possible to demonstrate, as an exception, the absence of a high risk (→ 28).

According to Art. 6 para. 3, this applies under two cumulative conditions:

  • Firstly, the intended use of the AIS must be harmless in the sense that it neither entails a greater risk nor significantly influences a decision (Recital 53). This is the case if the AIS is only intended,

  • to perform a "narrowly defined procedural task" (e.g. structuring unstructured data or categorizing data), or

  • to serve as a mere additional layer improving the result of a human activity (e.g. the now-common "improvement" of a text written by a human), or to recognize decision patterns or deviations from previous decision patterns (e.g. when checking whether a human assessment deviates from a given pattern), or

  • to carry out a merely preparatory task for an assessment (e.g. a translation of texts for further human use); in each case in more detail in Art. 6 para. 3 and Recital 53. The EU Commission (→ 51) is expected to propose clarifications here.

  • Secondly, the AIS must not perform profiling (loc. cit.). For this term, the AIA refers to the GDPR (https://dtn.re/8YoXjh), Art. 4 No. 4.

A provider who wishes to make use of this exemption must document the assessment prior to placing on the market or putting into service (Art. 6 para. 4). It must also register the AIS in the same way as an HRAIS (Art. 49).

Core obligations for HRAIS

35 What are the main obligations along the value chain?

Not all intermediate steps in the value chain trigger obligations and requirements. In principle, AIS must meet the essential requirements at the time they are placed on the market or put into service (→ 15). From a practical perspective, however, other processes also trigger certain obligations.

These obligations can be broken down as follows, although the allocation to individual phases cannot be clear-cut, as the AIA does not legally define all of these factually distinct stages as triggers for obligations. Details on the individual obligations can be found in the referenced questions and answers. It should also be noted that for the products listed in Annex I Section B, the AIA does not apply as such; rather, the obligations adopted in the respective product regulation apply (→ 28).

No. | System | Role | Trigger | Legal consequences and requirements

Provider
1 | HRAIS | Provider | Procurement of system components

If the provider procures components from a supplier – this will involve software components, because in a combination of hardware and software only the software is likely to be considered the AIS, and the procurement of hardware therefore does not constitute incorporation into an AIS (→ 28) – it must conclude a contract with the supplier. The contract must be in writing, i.e. probably documented in text form, and regulate the points essential for the HRAIS provider (Art. 25 para. 4). The AI Office (→ 52) is expected to provide templates here.

Excluded from this obligation is the delivery of a non-GPAI under a free and open-source license (FOSS), but such software providers are encouraged to provide information relevant to HRAIS providers (Recital 89).
2 | HRAIS | Provider | Training

An HRAIS does not necessarily have to be trained – training is not an obligation in itself (Art. 10 para. 6; nor is it required as a risk mitigation measure: Recital 65), but rather a circumstance that can lead to classification as an AIS (→ 13). In the case of training, however, certain requirements apply.

First of all, the question arises with which data an HRAIS should or may be trained. Art. 10 para. 3 (data governance) specifies requirements for this: the data must take into account the characteristics or elements that are typical for the setting in which the HRAIS is intended to be used, i.e. it must be meaningful. This may include the use of personal data or even particularly sensitive personal data (→ 58), e.g. for systems that classify applications and must be trained in such a way that they have the weakest possible bias in terms of age, gender, ethnic background, etc. (some manufacturers therefore include bias audits in their customer documentation). Art. 10 para. 5 therefore contains a legal basis for the use of such data for testing and training purposes, subject to the conditions of para. 5 lit. a-f.

For the training itself, the provider must then make and document a number of decisions. This is set out in Art. 10 para. 2. It mainly concerns the procurement of training data, the preparation of the data (e.g. labeling, tagging, etc.), the definition of assumptions and target values, the metrics for measuring whether targets are achieved or assumptions are correct, and the avoidance of bias.

Even during the training phase, the aforementioned risk management system (RMS) is relevant for HRAIS (see Art. 9). Although, as mentioned, the provider does not have to carry out training as a risk mitigation measure, it is nevertheless encouraged to do so (Recital 65). In this respect, the RMS should of course also cover the training phase.
3 | HRAIS | Provider | Testing

Unlike training, testing is a separate obligation of the HRAIS provider (Art. 9 para. 6). HRAIS must be tested so that the risk can be determined and, if necessary, mitigated. Tests must be carried out at the appropriate time, but before placing on the market or putting into service (para. 8). For tests by providers of GPAI models with systemic risks, see → 41.

For carrying out the tests, the requirements of Art. 9 and 10 apply. The requirements for training data also apply to test data (Art. 9 para. 6; this also applies to any use of personal data).

According to Art. 9 para. 7, tests may also be carried out under real-world conditions for a maximum of 12 months, provided that the requirements of Art. 60 AIA are met. Such tests require, among other things, a separate plan, which must be approved by the competent market surveillance authority (Art. 60 para. 4 lit. a and b).
4 | HRAIS | Provider | Placing on the market or putting into service

Legally, the time at which the HRAIS is placed on the market or put into service is decisive for most duties of the provider. The provider must therefore take these into account when planning and designing an AIS that is potentially an HRAIS.

First of all, the provider must prepare the technical documentation (Art. 11 and Annex IV). This is the core: the technical documentation serves to document compliance with the essential requirements and is therefore also the basis for the conformity assessment. In particular, it contains a description of the HRAIS, its components, its development or training (including the data used for this and the validation and relevant tests), its functioning and architecture, the guarantee of human supervision (→ 37), its control, the risk management system and the market surveillance procedure (Annex IV).

The operating instructions (Art. 3 No. 15 and Art. 13 para. 3) are also part of the technical documentation (Annex IV No. 1 lit. h). They state and define the intended use of the HRAIS, which determines whether the AIS is an HRAIS according to Annex III (→ 32), helps to define the provider's area of responsibility and is an essential benchmark for the compliance requirements (cf. for example Art. 8 para. 1, Art. 10 para. 3 or Art. 26 para. 6 AIA). The operating instructions must contain precise, complete, correct, clear and comprehensible information, be provided digitally or physically, but in an accessible form (Art. 13 para. 3), and contain at least the information pursuant to Art. 13 para. 3 lit. a-f. This includes, among other things, the purpose, characteristics and performance limits of the HRAIS, the measures to ensure human supervision, the lifespan of the HRAIS, information on maintenance and updates, and a description of the logging capability.

On the basis of the technical documentation, the provider must then carry out the conformity assessment procedure (→ 15), and for each HRAIS it must issue an EU declaration of conformity and keep it for the attention of the authorities (Art. 47). In addition, it must affix a physical or digital CE mark, together with its name or trademark and a contact address (Art. 16 lit. b → 15).

The essential duties also include the following:

QMS: According to Art. 17, the provider must have a quality management system (QMS) that generally "ensures" compliance with the AIA, i.e. a system of policies, processes and instructions that covers all phases of the HRAIS, including a compliance concept with responsibilities and accountabilities, information on the development, testing and validation of the HRAIS, where applicable the harmonized standards used for conformity assessment, data governance (→ 36), market surveillance (→ 43), incident handling (→ 45), communication with authorities, required documentation and resource management. The risk management system is also part of the QMS (Art. 17 para. 1 lit. g; the RMS can be managed separately, but must be covered by the QMS).

RMS: The provider must set up, apply, document and maintain a risk management system (RMS) for each HRAIS (Art. 9). The RMS must accompany the HRAIS throughout its life cycle – even after it has been placed on the market or put into service – and be kept up to date, which requires appropriate governance. In particular, risks to health, safety or fundamental rights, especially of vulnerable persons, must be continuously identified and assessed, not only in relation to the intended use but also to foreseeable misuse (Art. 9 para. 2 lit. b), and they must be adequately mitigated as early as the design and development phase, insofar as the provider can mitigate them (lit. d). This also includes, for example, informing or training the deployer (Art. 9 para. 5 lit. c). The identified and accepted risks should then be mentioned in the operating instructions. The provider can base the RMS on corresponding standards (→ 61).

Ensuring logging capability: The provider must ensure that the system is technically capable of logging (Art. 12). Art. 12 para. 2-3 specify what must be logged.

Comprehensibility of the output (Art. 13): The provider must ensure that the output of the system is clear and understandable for the deployer. The operating instructions (Art. 3 No. 15) serve this purpose, but design measures will also be required.

Human supervision (Art. 14): The HRAIS must be designed in such a way that it enables effective human supervision. This may include measures built into the HRAIS (e.g. user interfaces, a kill switch, etc.), but also instructions that enable the deployer to understand the HRAIS sufficiently (see Art. 11, Art. 14 para. 4 and Annex IV).

Reliability, robustness and cybersecurity (Art. 15): An HRAIS must be designed in such a way that it is reliable and robust and ensures a sufficient level of cybersecurity. The provider must therefore ensure, among other things, that the HRAIS is sufficiently resistant to physical and digital threats and that suitable measures are in place to protect the integrity, confidentiality and availability of the HRAIS. For systems that continue to learn after being placed on the market or put into service, the risk of bias and feedback loops must be mitigated. The EU Commission (→ 51) is expected to contribute to the development of benchmarks and metrics (Art. 15 para. 2 AIA).

Accessibility by design (Art. 16): Accessibility must be integrated into the design of the HRAIS. The requirements are set out in detail in Directive 2016/2102 on the accessibility of the websites and mobile applications of public sector bodies and in Directive 2019/882 on accessibility requirements for products and services (Art. 16 lit. l).

Registration: Providers must register the HRAIS with the Commission (→ 51) if it is to be classified as an HRAIS in accordance with Annex III (use cases) (→ 28). To do so, they must provide at least the information specified in Annex VIII Section A.
5 – HRAIS – Provider – Occurrence of special risks: If the provider becomes aware of particular risks within the meaning of Art. 79 para. 1, it must immediately investigate the causes and inform the competent market surveillance authorities (Art. 82 para. 2 → 45).

6 – HRAIS – Provider – Occurrence of a serious incident: If a serious incident (→ 45) is detected, the provider must immediately inform the competent market surveillance authorities (→ 55), investigate the incident and mitigate the risks. For providers of GPAIM with systemic risks, see below.

7 – AIS – Provider – Placing on the market or putting into service in the EU: If an AIS is placed on the market or put into service in the EU, the provider is subject to the AIA (→ 18) and must appoint an authorized representative in the EU (→ 26).

8 – AIS – Provider – Use of output in the EU: Even if an entity uses an AIS in such a way that its output is used as intended in the EU, it falls within the scope of the AIA (→ 18) and must appoint an authorized representative in the EU (→ 26).

9 – AIS – Provider – Dealing with AIS: A provider's handling of AIS also triggers the requirement for AI literacy (→ 38).

10 – AIS – Provider – Generative AIS: In the case of AIS – this will primarily be GPAIS, but other AIS are also covered – that generate synthetic content (audio, image, video, text), providers must ensure that the output is marked in a machine-readable format so that it is recognizable as artificially created or manipulated ("watermarking" → 37).

11 – AIS – Provider – AIS for direct interaction with those affected: If an AIS (including an HRAIS, if applicable) is intended for direct interaction with data subjects, the provider must ensure that natural persons are informed of the interaction with an AIS (unless this is obvious in the given circumstances → 37).
Product manufacturer

12 – HRAIS – Product manufacturer – Installation of an AIS in a product: Manufacturers of a regulated product that is subject to product regulation under Annex I because an AIS has been installed as a safety component (within the meaning of Art. 3 No. 14), and who place the product on the market or put it into service in their own name, are deemed to be providers within the meaning of the AIA (Art. 25 para. 3) and have the corresponding obligations.
Importers and distributors
13 – HRAIS – Importer – Import: The obligations of the importer (→ 23) are considerably less extensive than those of the provider, because the main responsibility remains with the provider. First and foremost, the importer has the duty to verify the compliance measures of the provider, and if it has doubts about the compliance of the HRAIS, it may not place the HRAIS on the market. If it encounters risks within the meaning of Art. 79 para. 1, it must also inform the provider, the authorized representatives and the market surveillance authorities (Art. 23 para. 2 → 45). Further obligations arise from Art. 23 para. 3 – 7.

14 – HRAIS – Operators, importers, distributors – Occurrence of special risks: If an operator, importer or distributor has reason to believe that an HRAIS poses particular risks to health, safety or fundamental rights (Art. 79), it must immediately inform the provider or distributor (in the case of the operator), the provider and its authorized representative (in the case of the importer), or the provider and the importer or any other body involved (in the case of the distributor), as well as the competent market surveillance authority, and suspend the use of the HRAIS (Art. 26 para. 5, Art. 23 para. 2 and Art. 24 para. 4; Art. 82 para. 2 → 45).

15 – HRAIS – Distributor – Distribution: A distributor is anyone who makes an HRAIS available on the market (→ 20). The obligations of distributors are similar to those of importers (Art. 24).
Operator

16 – HRAIS – Operator – Use: Operators (→ 21) must keep an inventory of the HRAIS they use (this follows indirectly from Art. 26). They must also ensure that all relevant operating data is automatically logged and stored for a specified period of time, and they must comply with the provider's operating instructions (Art. 26 para. 1). They must also ensure that the input data is fit for purpose (i.e. appropriate to the purpose of the HRAIS) and sufficiently representative (Art. 26 para. 4 → 36).

Another key element is human oversight: the operator must ensure that human supervision is possible during operation (Art. 26 para. 2) and must continuously monitor the operation of the system (Art. 26 para. 5). If operators suspect a particular risk within the meaning of Art. 79 para. 1 (→ 45), they must inform the provider or distributor and the market surveillance authority accordingly and stop using the HRAIS (Art. 26 para. 5; which presupposes that they can react accordingly). In the event of a serious incident (→ 45), first the provider and then the importer or distributor and the market surveillance authority must be informed immediately (see also Art. 73).

17 – HRAIS – Operator – Use in the workplace: If an employer uses an HRAIS in the workplace, it must inform the employees and employee representatives that they will be affected by its use (Art. 26 para. 7). Obligations to cooperate under the applicable law are reserved.

18 – HRAIS – Operator – Use for decisions: Special requirements apply if an HRAIS is to be used for decisions (it is also possible that an AIS becomes an HRAIS as a result: Art. 25 and Annex III → 28). If the HRAIS makes decisions that have legal or other significant effects, this must be communicated to the data subjects (Art. 13 and Art. 26 para. 11 → 37), and in the case of automated AI decisions data subjects have a right to an explanation (Art. 86; in addition, the relevant requirements of the applicable data protection law may of course apply). The operator must also ensure that the input data for the system is relevant, correct and up to date (see above).

19 – HRAIS – Operator – Biometric remote identification: If an HRAIS is used for remote biometric identification within the meaning of Annex III No. 1 lit. a, the results must be checked and confirmed separately by at least two competent natural persons before decisions are made or measures taken (Art. 14 para. 5).

20 – HRAIS – Operator – Use of an emotion recognition system or for biometric categorization: When using an emotion recognition system or a system for biometric categorization, operators must inform the data subjects about its operation and the personal data used (→ 37).

21 – HRAIS – Operator – Occurrence of a serious incident: If a serious incident is detected, the operator must immediately inform first the provider and then the importer or distributor as well as the competent market surveillance authorities (Art. 26 para. 5 and Art. 73 → 45).

22 – AIS – Operator – Operation: When operating an AIS, only the requirements for AI literacy apply (→ 38).

23 – AIS – Operator – Use for deepfakes: If an AIS (which can also be an HRAIS) is used for deepfakes, the operator must disclose the artificial production (Art. 50 para. 4 → 37).

24 – AIS – Operator – Generation of output: If operators use an AIS to create or manipulate text and the text is published to inform the public about matters of public interest, they must disclose the artificial creation or manipulation (Art. 50 para. 4 → 37).
GPAIM
25 – Provider – Placing a GPAIM on the market in the EU: Providers of a GPAIM fall within the scope of the AIA if they place a GPAIM on the market in the EU (→ 18). In this case, they must appoint an authorized representative in the EU (→ 26).

26 – Provider – Offering a GPAIM for installation in an AIS: The AIA does not treat a GPAIM as an HRAIS, but as a preliminary stage to an AIS (→ 39). The provider of the GPAI model must therefore provide providers who install the GPAIM in an AIS with information about the GPAI model and its development in accordance with the requirements of Annex XII (Art. 53 para. 1 lit. b). In particular, it must prepare technical documentation (Art. 53 para. 1 lit. a), not in accordance with Annex IV, as is the case for HRAIS providers, but in accordance with its own Annex XI.

Because GPAI models are mostly LLMs trained with a mass of data, the provider of the model must also have a policy on compliance with European copyright law (Art. 53 para. 1 lit. c; → 59), and it must make details of the training data publicly available (Art. 53 para. 1 lit. d; the AI Office → 52 is to draw up a template for this).

However, providers who offer the GPAIM under a free and open-source license (FOSS) are exempt from these documentation obligations if they make the parameters of the model publicly available. A counter-exception applies to GPAI models with systemic risks (→ 39).

In contrast, the general requirements for HRAIS providers do not apply to GPAIM providers (→ 39; as long as they are not also HRAIS providers).

27 – Provider – Offering a GPAIM with systemic risks: The provider of a GPAIM with systemic risks (→ 39) has the obligations of all GPAIM providers. In addition, it must first report the model to the EU Commission, at the latest two weeks after the model has reached the systemic risk threshold (Art. 52 para. 1). The Commission maintains a corresponding public list (Art. 52 para. 6), whereby the provider can attempt to have its model deleted from it as not systemically relevant (→ 41).

Furthermore, the provider in question is obliged under Art. 55 para. 1 to

- assess systemic risks and mitigate them if necessary,

- evaluate the model with regard to risk management, including through adversarial testing or red teaming,

- document information on serious incidents (→ 45) and possible mitigation measures and inform the AI Office (→ 52) immediately, and

- ensure appropriate cybersecurity.
36 What applies to the training, validation and testing of AI systems?

Testing and validation and, in particular, training are key aspects of AIS. The AIA contains special rules for them:

  • Providers are obligated to test the HRAIS before placing it on the market or putting it into service (Art. 9 para. 6).

  • For the data used for training and testing purposes (see Art. 3 No. 29 and 31), suitable "data governance and data management procedures" must be applied (Art. 10 para. 1 and 2). In particular, it must be regulated how the corresponding conceptual decisions are made, which data are required and how they are obtained (in particular personal data), how data are processed (e.g. by annotation, labeling, cleansing, updating, enrichment and aggregation), how test hypotheses are formed and how possible bias is to be dealt with (Art. 10 para. 2).

  • Training, validation and test data must – with a view to the intended use of the HRAIS – be sufficiently relevant, representative, accurate and complete. This also means that they must have suitable statistical characteristics (Art. 10 para. 3) and reflect or take into account the context of their use (para. 4).

  • Under certain circumstances, a bias can only be prevented or detected if personal data are included in the data used for training, testing and validation. For this case, Art. 10 para. 5 exceptionally contains a legal basis within the meaning of Art. 6 and 9 GDPR, i.e. even for special categories of personal data, provided that certain conditions are met to ensure data minimization and the protection of the persons concerned. This must be documented in the record of processing activities.

  • The provider must inform the downstream actors, in particular via the technical documentation and the operating instructions (→ 35). The technical documentation must include information on the training and the training datasets used (Annex IV No. 2 lit. d; for providers of a GPAIM, Annex XI No. 2 lit. b and Annex XII No. 2 lit. c apply if the GPAIM is to be integrated into an AIS), and the operating instructions must also contain information on the training, validation and test data sets used (Art. 13 para. 3 lit. b no. 6).

  • HRAIS must generally guarantee a sufficient level of cybersecurity. This also includes adequate protection against attacks during the training phase, e.g. by manipulation of the training data ("data poisoning") or of pre-trained components such as a GPAIM that are used during training ("model poisoning"; Art. 15 para. 5).

  • Providers of a GPAIM must, like providers of an HRAIS, document the training and testing procedure in the technical documentation (Art. 53 para. 1 lit. a), and they must prepare and publish a summary of the content used for the training (Art. 53 para. 1 lit. d).

  • The cumulative amount of computation used for the training is decisive for the classification of a GPAIM as one with systemic risks (Art. 51 para. 2).

  • The market surveillance authorities may request access to the training, validation and testing data sets, among other things (Art. 74 para. 12).

  • Facilitations for training apply within the scope of the AI real laboratories (→ 48).

These obligations are naturally directed at providers. Operators have separate obligations of their own with regard to data quality (→ 35).

37 How does the AI Act address the transparency obligations for AI systems, in particular for automated decisions?

The AI Act places particular emphasis on transparency, especially for AIS that make decisions. This can also apply to AIS that are not HRAIS. In particular, Chapter IV, with its single Art. 50, contains corresponding provisions, with the first two paragraphs relating to providers and the following two to operators.

Providers have the following obligations in particular:

  • System design: Providers must design the HRAIS in such a way that its operation is transparent, i.e. that its output can be interpreted and used appropriately (Art. 13 para. 1). The AIA does not conclusively specify how this is to be ensured.

  • Operating instructions: HRAIS must be accompanied by operating instructions (→ 35 No. 6).

  • Interaction with those affected: For AIS that are intended to interact with data subjects (e.g. chatbots), the data subjects must be informed that they are interacting with an AIS, unless this is obvious in the given circumstances (Art. 50 para. 1), e.g. in the case of a translation service or a chatbot such as ChatGPT. The providers of the corresponding AIS may have to ensure this. The designation as a "bot" may often be sufficient for this purpose.

  • Synthetic content: AIS providers must label synthetic outputs in a machine-readable format and ensure that they are recognizable as artificially generated or manipulated (Art. 50 para. 2). This obligation applies to providers, not operators (see next point). Reference should be made here to the work of the Coalition for Content Provenance and Authenticity (C2PA; https://c2pa.org/).

AIS with a merely supporting function for standard editing or without significant changes to the input are exempt. No labeling obligation therefore applies, for example, to texts written by a human and edited with DeepL or ChatGPT. Beyond the wording, this must also apply analogously under Art. 50 para. 4 if a text was generated by an AIS but revised or at least substantively reviewed by a human; in this case, the human has made the text their own, which is why it should no longer be treated as synthetic.
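How the machine-readable marking is to be implemented technically is left open by Art. 50 para. 2; in practice, provenance standards such as C2PA (see above) will typically be used. The following Python fragment is only a hedged sketch of the principle – an ad-hoc JSON envelope with assumed field names, not the C2PA format:

```python
import json
from datetime import datetime, timezone

def mark_synthetic(text: str, generator: str) -> str:
    """Wrap generated text in a machine-readable provenance envelope.

    Illustrative only: real deployments would follow a recognized
    standard such as C2PA instead of this ad-hoc structure.
    """
    return json.dumps({
        "content": text,
        "provenance": {
            "ai_generated": True,    # recognizable as artificially created
            "generator": generator,  # assumed field: the producing system
            "created": datetime.now(timezone.utc).isoformat(),
        },
    })

print(mark_synthetic("An artificially generated sentence.", "example-model"))
```

The point of the sketch is merely that the marking travels with the content in a form that machines can evaluate, rather than being a visible notice only.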

  • Human supervision: Art. 14 contains provisions to ensure human oversight, which also have a transparency aspect.

Operators have the following obligations in particular:

  • Deepfakes: According to Art. 3 No. 60, deepfakes are image, sound or video content that is deceptively similar to real persons, objects, places, facilities or events. In this case, operators – here we are now talking about the use of an AIS, not its development – must disclose that the content has been artificially created or manipulated (Art. 50 para. 4).

In the case of obviously artistic, creative, satirical, fictional or analogous works, the reference to the artificial production or manipulation must be made in such a way that the presentation or enjoyment of the work is not impaired.

  • Generative AIS: Operators of a generative AIS must disclose that the content has been artificially generated or manipulated (Art. 50 para. 4). However, this only applies to published texts if they are intended to inform the public about matters of public interest, and not if the generated texts have been human-edited or editorially controlled and someone bears editorial responsibility for the publication. An exception again applies in the area of law enforcement.

  • Emotion recognition: The operator of a (non-prohibited → 27) emotion recognition system or a biometric categorization system must inform the natural persons concerned (Art. 50 para. 3; again with an exception for the area of law enforcement).

  • Decisions: If the operator of an HRAIS according to Annex III (use cases → 28) uses the HRAIS to make or support a decision that affects natural persons, they must be informed accordingly (Art. 26 para. 11).

  • Human supervision: Art. 26 contains requirements for operators to exercise human oversight, which also have a transparency aspect.

The mandatory information must be provided in a clear, unambiguous and accessible manner at the latest at the time of the first interaction or exposure (Art. 50 para. 5).

For GPAIM, transparency measures are also specified, but separately in Art. 53 (see → 40 and 42). Further requirements may apply under other provisions, e.g. the information and transparency obligations of the applicable data protection law when personal data is processed.

38 What requirements does the AIA place on "AI literacy"?

"AI literacy" or "AI competence" refers to the skills required for the competent and risk-aware use of an AIS (Art. 3 No. 56). Art. 4 therefore requires measures to impart this competence to staff and auxiliary persons (insofar as they are to handle an AIS). Training, instructions and other information can be considered for this purpose.

This "upskilling" is the only explicit obligation that the AIA imposes on providers and operators of all AIS. However, such AIS may fall under sectoral requirements, and if they are supplied to consumers, general product safety law may apply. Whether the Swiss PrHG also applies to AIS that are not installed in a product such as a robot has not been conclusively clarified. Further obligations arise for AIS in special constellations from the transparency requirements (→ 37).

GPAI

39 What is a general purpose AI model (GPAIM)?

GPAIM are regulated separately in their own Chapter V. This is due to the legislative history, in which the regulation of GPAI was controversial until the end (→ 3). Within the GPAI models, a particularly sensitive category is regulated separately: the GPAI models "with systemic risks" (→ 41).

GPAIM are "AI models" (not a defined term) that are generally usable, perform a "wide range of different tasks competently" and can be integrated into downstream AIS (Art. 3 No. 63). This primarily concerns Large Language Models (LLMs) such as those behind ChatGPT or Claude from Anthropic. General usability is assumed if a model has at least one billion parameters and has been trained with a large amount of data using self-supervision at scale (Recital 98 → 12). By contrast, a model, e.g. an LLM, that has been trained for a narrow area of application would not be a GPAIM.

It is important to note that a GPAIM is not an AIS. An AIS – and, where applicable, an HRAIS – is only created by the addition of further components (Recital 97: the concept "should be clearly defined and distinguished from the concept of AI systems"; "although AI models are essential components of AI systems, they do not in themselves constitute AI systems"). So: GPAI model + additional component = AIS. Little is needed for the step from GPAI model to (HR)AIS: a user interface is sufficient (Recital 63).

It is also possible for a GPAIM to be built into another model, which then itself becomes a GPAIM (Recital 100). LLMs can also be trained further (e.g. by fine-tuning → 12). If this narrows the scope of application sufficiently, it is conceivable that the corresponding model will no longer have general applicability.

The provider of a GPAIM – i.e. the entity that develops and places the GPAIM on the market – therefore becomes the provider of the (HR)AIS as soon as it puts the GPAIM to a specific use and the resulting AIS is made available on the market or put into operation. Following this logic, Art. 53 requires, among other things, that the GPAIM provider provides the downstream AIS provider with certain information (even if the downstream system is not an HRAIS).

An LLM (→ 12) from OpenAI would be an example of a GPAI model. ChatGPT, on the other hand, has a user interface and is therefore likely to be an AIS (even if this is not undisputed). If a third party uses a model from OpenAI and builds its own chatbot with it, this third party, and not OpenAI, is the provider of the chatbot as an AIS. Of course, this also applies if the third party in question further adapts the chatbot to its own needs by fine-tuning it.

In addition to the GPAIM, the AIA also defines GPAIS (general purpose AI systems; Art. 3 No. 66). GPAIS are a subset of AIS and are subject to the corresponding regulations. The AIA therefore only mentions GPAIS in passing (in Art. 3 No. 68, Art. 25 para. 1 lit. c, Art. 50 para. 2 and Art. 75 para. 2, and in some recitals).

40 What are the obligations of GPAIM providers?

As mentioned, the obligations of the GPAIM provider (→ 20) are regulated in a chapter of their own. The requirements for HRAIS providers – in particular Art. 16 AIA and the provisions referred to therein – do not apply to GPAIM providers. However, GPAIM providers must:

  • prepare technical documentation of the GPAIM, including the training and testing procedure and the results of its evaluation. The minimum information is set out in Annex XI. It must be made available to the AI Office and the competent national authorities on request (Art. 53 para. 1 lit. a). An exception applies to FOSS (Art. 53 para. 2);

  • document further information on the GPAIM (in particular in accordance with Annex XII) and make it available to the providers of downstream AIS (Art. 53 para. 1 lit. b). The exception for FOSS also applies here (Art. 53 para. 2);

  • have a strategy for compliance with EU copyright law. This also includes an indication of how, in the case of the text and data mining exception (→ 59), a reservation of use within the meaning of Art. 4(3) of the Copyright Directive (https://dtn.re/c6zFb9) is complied with (Art. 53 para. 1 lit. c). It should be noted that, according to Recital 106, this requirement also applies to non-European GPAIM providers who place a GPAIM on the market in the EU;

  • publish a summary of the training data (the AI Office is to draw up a template for this), subject to business secrets (Recital 107).

They may also have to appoint an authorized representative (Art. 54 AIA → 26). As elsewhere, the Commission can further specify the requirements (→ 51).

41 How does the AIA regulate GPAIM with systemic risks?

Systemic risks are risks that have a significant impact due to the "reach" of the GPAIM or due to possible negative consequences "for public health, safety, public security, fundamental rights or society as a whole" and can spread across the entire value chain (Art. 3 No. 65).

However, whether this applies to a GPAIM is not decided according to the legal definition, but according to the criteria of Art. 51 para. 1, according to which a systemic risk exists in two cases:

  • when the GPAIM has "high-impact capabilities", which is to be assessed using suitable methods such as benchmarks (Art. 51 para. 1 lit. a); this is in any case presumed if "the cumulative amount of computation used for its training" exceeds 10²⁵ floating point operations (a rough calculation is sketched after this list). Floating point operations, in turn, are defined as a mathematical quantity in Art. 3 No. 67. This threshold is likely to be adjusted in the future (Recital 111).

  • when the EU Commission decides that a systemic risk exists, with Annex XIII providing the relevant criteria (lit. b and Art. 52 para. 4 – 5). This relates to the performance of the model, expressed, among other things, by the number of parameters or the scope of the training data, but also to the size of the model's market.
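To get a feel for the 10²⁵ FLOP threshold of Art. 51 para. 1 lit. a: training compute is often approximated with the rule of thumb of roughly 6 × parameters × training tokens – a heuristic from the ML literature and an assumption here, not a criterion of the AIA. A back-of-the-envelope check for a hypothetical model could look like this:

```python
# Back-of-the-envelope check against the Art. 51 threshold. The 6*N*D
# approximation of training FLOPs is a common ML heuristic and an
# assumption here; the AIA itself only fixes the 1e25 threshold.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

# Hypothetical model: 70 billion parameters, 2 trillion training tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed high-impact (Art. 51 para. 1 lit. a)?", flops > THRESHOLD_FLOPS)
```

On these assumed figures the estimate comes to about 8.4 × 10²³ FLOPs, i.e. below the threshold; the presumption of Art. 51 para. 1 lit. a would not be triggered, although the Commission could still classify the model under lit. b.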

The provider must first report the GPAIM with systemic risks to the Commission (→ 51) as soon as possible once the GPAIM has reached the systemic risk threshold, but at the latest after two weeks (Art. 52 para. 1). It may then attempt to prove that its GPAIM exceptionally does not pose systemic risks after all, if the initial qualification is based on the material criterion of Art. 51 para. 1 lit. a. To do so, it must present appropriate arguments to the Commission. If the Commission is not convinced, the GPAIM will be entered on the list of systemically risky GPAIM (Art. 51 para. 3). – If the Commission has classified the GPAIM as systemically risky ex officio, the provider may request reconsideration at any time (Art. 51 para. 5).

42 What are the obligations of providers of GPAIM with systemic risks?

Providers of GPAIM with systemic risks have additional obligations, i.e. in addition to the obligations of providers of less sensitive GPAIM. They must (Art. 55):

  • evaluate the GPAIM in a standardized manner;

  • assess and reduce systemic risks at EU level;

  • document information on serious incidents and possible remedial measures and inform the AI Office and, if necessary, the competent national authorities; and

  • ensure adequate cybersecurity.

AIS in operation

43 How is market surveillance regulated?

Market surveillance is a central element of the AIA – it is intended to ensure both the compliance of AIS in the interests of the persons concerned and a level playing field.

Providers must therefore operate a post-market monitoring system after the HRAIS has been placed on the market (Art. 72 para. 1). This includes the collection, documentation and evaluation of data on the performance of the HRAIS (which may be procured via the operators) during the entire lifecycle of the HRAIS.

This system includes in particular a plan for the monitoring of the HRAIS after it has been placed on the market. This plan is in turn part of the technical documentation in accordance with Annex IV (→ 35 No. 6); the Commission is yet to specify what such a plan should look like (Art. 72 para. 3). If an HRAIS falls under Annex I Section A (e.g. medical devices), providers can also integrate the requirements of the AIA into existing systems and plans (Art. 72 para. 4).

Market surveillance also includes the obligation to react to non-compliance (→ 44) and to certain incidents (→ 45), as well as the corresponding powers of the authorities.

In general, AIS also constitute products within the meaning of the Market Surveillance Regulation (Art. 74 para. 1; https://dtn.re/JgakBQ). The market surveillance authorities (→ 55) can therefore take action whenever an AIS – it does not have to be an HRAIS – is likely to endanger the health or safety of users and does not comply with the applicable harmonization legislation (Art. 16 para. 1 of the Market Surveillance Regulation).

44 What applies if an HRAIS is not (or no longer) compliant?

It is not only necessary to react to serious incidents (→ 45), but of course also whenever an HRAIS no longer meets the relevant requirements. The AIA places responsibility not only on the provider, but also on other stakeholders.

If providers have reason to believe that an HRAIS no longer complies with the AIA at any time after it has been placed on the market or put into service, they must rectify the non-compliance immediately or, if necessary, withdraw, disable or recall the HRAIS (Art. 20 para. 1). "Withdrawal" means that the provision of an HRAIS already in the supply chain is prevented (Art. 3 No. 17), and "recall" means that the HRAIS is returned or at least taken out of service or switched off (Art. 3 No. 16).

Providers must also inform the downstream market accordingly, i.e. the distributors, the operators, the authorized representative and the importers (Art. 20 para. 1). If the HRAIS also entails a risk in accordance with Art. 79 para. 1 AIA, the corresponding obligations apply (→ 45).

The downstream actors are also involved in the event of non-compliance. Importers may only place the HRAIS on the market once compliance has been restored (Art. 23 para. 2), and the same applies to distributors with regard to making it available on the market (Art. 24 para. 2 – 3).

Authorized representatives also have tasks: if they have reason to believe that the provider is in breach of the AIA, they must terminate their mandate and inform the competent market surveillance authority and, if applicable, the notified body, stating the reasons (Art. 22 para. 4).

45 How should incidents and special risks be dealt with?

As part of market surveillance (→ 43), certain incidents must be documented and reported. This obligation applies to the providers of HRAIS and is triggered by "serious incidents". These are malfunctions, but also generally incidents that lead directly or indirectly to death or serious harm to health, to a "serious and irreversible" disruption of the management or operation of critical infrastructure, to a violation of fundamental rights or to serious damage to property or the environment (Art. 3 No. 49).

If such an incident occurs, the provider must report it to the responsible market surveillance authorities (→ 55), whereby special rules apply for certain HRAIS. The notification must be made immediately upon discovery, but no later than 15 days after the provider or, where applicable, the operator becomes aware of the incident (Art. 73 para. 2).

If an incident has widespread effects ("widespread infringement") or affects critical infrastructure, the reporting period is shortened to two days (Art. 73 para. 3 AIA); in the event of death, it is ten days (para. 4). As in data protection law or for reports to FINMA, an initial report followed by a follow-up report is possible.
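The staggered deadlines can be summarized schematically; the following is a simplified reading of Art. 73 para. 2 – 4 for illustration only, not a substitute for the provision itself:

```python
# Simplified sketch of the Art. 73 reporting deadlines (days after the
# provider or operator becomes aware); reports are due "immediately",
# these are only the outer limits.
REPORTING_DEADLINES_DAYS = {
    "serious incident (general case)": 15,                   # Art. 73 para. 2
    "widespread infringement / critical infrastructure": 2,  # Art. 73 para. 3
    "death of a person": 10,                                 # Art. 73 para. 4
}

for incident_type, limit in REPORTING_DEADLINES_DAYS.items():
    print(f"{incident_type}: report at the latest after {limit} days")
```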

After the notification, the market surveillance authorities inform the competent national authorities. If necessary, they must also order within seven days that the HRAIS be recalled or withdrawn from the market or that its making available on the market be prohibited (Art. 73 para. 8 in conjunction with Art. 19 of the Market Surveillance Regulation; https://dtn.re/ElQE2G).

The provider must also investigate the incident, assess the risks and mitigate them where possible (Art. 73 para. 6 AIA), in cooperation with the competent authorities.

In addition to providers, operators also have obligations in the event of a serious incident: they must inform the provider of such incidents (Art. 26 para. 5 and Art. 73). In the case of particularly sensitive HRAIS or use in critical infrastructures, contractual provisions on this reporting obligation are to be expected in practice, even though it already arises from the AIA.

A distinction must be made between serious incidents and cases where an HRAIS leads to particular risks, i.e. atypically high risks for health, safety or fundamental rights (Art. 79 para. 1). In this case, various roles have corresponding duties. If a market surveillance authority has reason to believe that such risks exist, it examines the AIS in question and – if the assumption is confirmed – informs the competent national authorities. Operators, too, have special obligations in such a case if the system is an HRAIS.

46 What rights do data subjects and other bodies have?

All persons (natural and legal) have the right to lodge a complaint with the competent market surveillance authority (→ 55) if they have reason to believe that a provision of the AIA has been violated (Art. 85 para. 1). A person does not have to be particularly affected – competitor complaints are also possible.

In the case of a significant decision, data subjects also have the right to request an explanation from the operator regarding the role of the AIS in the decision and the key elements of the decision (→ 35 No. 13).

Affected parties also have the right to lodge a complaint with the AI Office (Art. 89 para. 2). This also applies to providers who have incorporated a GPAIM into their own AIS.

In addition, there are rights under other legal bases, in particular under the applicable data protection law (→ 58) and, if applicable, under contractual arrangements. Claims for damages may also be possible under certain circumstances.

Special questions

47 Are SMEs relieved of the burden of applying the AIA?

Implementing the requirements of the AIA will be challenging for SMEs, at least if they are active as providers. Anyone who purchases a GPAIM and places it on the market as an HRAIS becomes a provider of the HRAIS – there are therefore likely to be a large number of SMEs that cover a specific use case on the basis of an LLM and are providers for this use case.

In principle, the provisions of the AIA apply tel quel to SMEs as well. However, the AIA contains some provisions that are intended to support SMEs:

  • Art. 62 obliges the Member States to take support measures by granting SMEs priority access to AI real laboratories, carrying out awareness-raising and training measures for SMEs, allowing questions on the AIA and AI real laboratories to be addressed, and involving SMEs in the development of standards (→ 15).

  • SMEs should participate in the advisory forum (Art. 67 para. 2).

  • The interests of SMEs must be taken into account in codes of conduct (Art. 95 para. 4).

  • A slightly lower rate applies to fines (Art. 99 para. 6).

For micro-enterprises within the meaning of Commission Recommendation C(2003)1422 (https://dtn.re/U7vlKH), Art. 63 para. 1 also provides for a simplification of the QMS (→ 35).

48 What are AI real laboratories and tests under real conditions?

The AIA is committed to promoting innovation in various recitals, and its greatest contribution to promoting innovation is probably the fact that it is not a prohibition law (with few exceptions → 27). Chapter VI (Art. 57 ff.) is then expressly dedicated to the promotion of innovation.

Two main elements serve this purpose. The first element is the "AI real laboratories" (the corresponding English term is "AI regulatory sandbox"):

  • This involves facilitating the development, training, testing and validation of AIS before they are placed on the market or put into service, in accordance with a plan to be agreed between the providers and the competent authority (Art. 57 para. 5) and, if necessary, with the involvement of the data protection authorities (para. 10).

  • Art. 59 then contains a limited legal basis for the processing of personal data in the context of a real laboratory: personal data may be processed for development, training and testing in the real laboratory, but only if certain conditions are met and only when developing an AIS to safeguard certain public interests. This legal basis exists in addition to the analogous legal basis for testing purposes under Art. 10 (→ 36).

  • Providers can then receive proof of the activities carried out in the real laboratory and a final report, which can facilitate the conformity assessment procedure or market surveillance (para. 7). Compliance with the plan also provides a safe harbor against fines in the event of a violation of the AIA in connection with the plan, and possibly also of other requirements, in particular data protection law (para. 12).

  • Each Member State must set up at least one such laboratory by August 2, 2026 (Art. 57 para. 1). The Commission is to issue more detailed regulations before then (Art. 58 AIA).

The second element is the testing of Annex III HRAIS under real conditions:

  • HRAIS according to Annex III (i.e. the use-case-related HRAIS → 28) can be tested outside an AI real laboratory under real conditions, subject to certain requirements (Art. 60). This requires that the test is controllable, i.e. that the test is effectively monitored and that predictions, recommendations or decisions of the AIS can be reversed or disregarded (Art. 60 para. 4 lit. j–k). Serious incidents must be reported in accordance with Art. 73, i.e. the corresponding reporting obligation (→ 45) is brought forward to the time before placing on the market or putting into service (Art. 60 para. 7).

  • Tests must be based on a plan to be approved by the competent market surveillance authority (Art. 60 para. 4 lit. a–b).

  • Insofar as the plan requires the participation of test participants, they must in principle consent to participation (Art. 61 para. 4 lit. j and para. 5).

Sanctions & Governance

49 What applies to violations of the AIA?

Chapter XII concerns sanctions for violations of the AIA. Unlike the GDPR, the AIA itself does not impose specific fines, but in Art. 99 requires the Member States to introduce provisions on fines and other enforcement measures. Fines can be imposed on all actors, i.e. all entities involved in the value chain.

Depending on the type of infringement, the fines can reach up to EUR 35 million or 7% of turnover:

  • In the event of a violation of the prohibited practices (→ 27), the upper fine limit of up to EUR 35 million or 7% of the worldwide annual turnover applies (Art. 99 para. 3). As with the GDPR, the group turnover is likely to be decisive for this.

  • For certain other infringements, the upper limit for fines is EUR 15 million or 3% of annual turnover (Art. 99 para. 4). These fines can be imposed on operators as well as notified bodies. This concerns violations of Art. 16 (providers), Art. 22 (authorized representatives), Art. 23 (importers), Art. 24 (distributors), Art. 26 (operators), Art. 31, Art. 33 para. 1, 3 and 4 and Art. 34 (notified bodies) as well as Art. 50 (transparency; providers and operators).

  • In the case of incorrect, incomplete or misleading information provided to notified bodies or the competent national authorities, the fine limit is EUR 7.5 million or 1% of annual turnover (Art. 99 para. 5).

The higher amount is decisive in each case, except in the case of SMEs, where the lower amount applies (Art. 99 para. 6 → 47). In the specific case, the court or administrative authority (Art. 99 para. 9) must take into account the criteria of Art. 99 para. 7 when determining the fine, including the severity of the fault.
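This cap logic – the higher of the fixed amount and the turnover-based amount, except for SMEs, where the lower applies – is simple arithmetic. The figures below come from Art. 99 para. 3; the function itself is only an illustration:

```python
# Illustration of the Art. 99 fine caps: normally the higher of the two
# amounts applies; for SMEs the lower one (Art. 99 para. 6).
def fine_cap(fixed_eur: float, pct: float, turnover_eur: float,
             is_sme: bool = False) -> float:
    turnover_based = pct * turnover_eur
    return min(fixed_eur, turnover_based) if is_sme else max(fixed_eur, turnover_based)

# Prohibited practices (Art. 99 para. 3): EUR 35 m or 7% of turnover.
print(fine_cap(35e6, 0.07, 2e9))                # large group -> 140,000,000.0
print(fine_cap(35e6, 0.07, 50e6, is_sme=True))  # hypothetical SME -> 3,500,000.0
```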

For providers of GPAIM, Art. 101 contains a special provision. All violations of the AIA can be punished with a fine (Art. 101 para. 1 lit. a); however, Art. 101 para. 1 specifically mentions certain violations. The fine limit here is EUR 15 million or up to 3% of annual turnover.

The occurrence of a serious incident (→ 45) must of course be distinguished from such infringements.

50 Which authorities play a role under the AIA?

The AIA regulates the role of several authorities primarily in its own chapter on "Governance" (Chapter VII, Art. 64 ff.). Various authorities and institutions are entrusted with different and partly overlapping tasks. There is both a horizontal division of labor (within the EU) and a vertical division of labor (between the EU and the Member States).

The former is governed by Section 1 of Chapter VII (Governance). The Commission plays the leading role among the EU bodies and is generally responsible for enforcing the AIA. It has far-reaching powers, can issue specific provisions and is responsible for receiving notifications from stakeholders and other authorities (→ 51).

The AI Office ("European Office for Artificial Intelligence") is part of the Commission and is responsible for the market surveillance of GPAIM and of AIS based on a GPAIM of the same provider (Art. 88 and 75 → 52).

The European AI Board (EAIB) is to advise and support the Commission (and the Member States) in this (→ 53).

The national market surveillance authorities are responsible for monitoring compliance with the AIA (→ 55).

The notifying national authorities are responsible for the assessment, designation, notification and monitoring of AI conformity assessment bodies (Art. 28).

Conformity assessment bodies are in turn bodies that check and assess the conformity of AIS in accordance with the AIA (Art. 3 No. 21 → 15).

Due to the extensive cooperation obligations of the stakeholders and the wide-ranging information-gathering possibilities of the authorities, the Commission, the market surveillance authorities, the notified bodies and all other bodies involved in the application of the AIA are subject to a confidentiality obligation (Art. 78).

51 What tasks does the EU Commission have within the framework of the AIA?

The main role at EU level lies with the Commission and the AI Office as part of the Commission (→ 52).

The Commission, which monitors compliance with EU law in accordance with Art. 17 para. 1 of the Treaty on European Union, has a central role in this. Its powers can be categorized as follows (not entirely complete – some other, subordinate tasks of the Commission are not listed):

Concretizing delegated legislation: Art. 97 AIA confers on the Commission, on the basis of Art. 290 TFEU (https://dtn.re/9MhpKX), the right to adopt binding "delegated acts". The TFEU distinguishes between delegated acts and implementing acts. Delegated acts are legal acts supplementing or amending the basic legal act (here the AIA), which the Commission submits to the Council and Parliament for approval or rejection. Implementing acts are merely implementing provisions such as technical provisions, exemptions, etc., which are not submitted to Parliament and the Council.

The authority to adopt delegated acts is based on Art. 97 AIA and is intended to make it possible to reflect the particularly rapid technical developments in the area of AI. It concerns the following points:

  • the criteria for when an AIS becomes an HRAIS (→ 28) and, correspondingly, Annex III (use cases; Art. 7 para. 1 and 3);

  • Annex IV on the minimum content of the technical documentation (Art. 11 para. 3);

  • Annexes VI and VII and Art. 43 para. 1 and 2 on the conformity assessment procedure, and Annex V on the content of the EU declaration of conformity;

  • the criteria for classifying a GPAIM as systemically risky in accordance with Art. 51 para. 1 and 2 and Annex XIII;

  • Annexes XI and XII on the content of the technical documentation and the transparency requirements for downstream use of GPAIM (Art. 53).

In addition, the Commission can issue implementing acts. In doing so, it must generally comply with the Implementing Powers Regulation (https://dtn.re/B9uV04) (Art. 98 para. 2). These include:

  • intervening if a notified body does not, or no longer, meets the requirements for its notification (Art. 37 para. 4);

  • approving codes of practice in connection with GPAIM pursuant to Art. 56, in general and in particular to specify the transparency requirements for AI-generated or manipulated content (Art. 50 para. 7), the obligations of GPAIM providers pursuant to Art. 53 and of systemically risky GPAIM pursuant to Art. 55 (Art. 56 para. 6);

  • issuing common specifications if relevant standards are missing (Art. 41 AIA), and common rules in the area of GPAIM if no code of practice exists by August 2, 2025 (Art. 56 para. 9);

  • concretizing regulations for AI real laboratories (Art. 58 para. 1 and 2) and for tests of HRAIS under real conditions (Art. 60);

  • provisions for the establishment of a scientific panel of independent experts (Art. 68 para. 1 and 5 and Art. 69 para. 2);

  • concretizations for the post-market monitoring plan of HRAIS providers (Art. 72 para. 3);

  • concretization of the sanction procedure (Art. 101 para. 6).

The Commission can also contribute to the harmonization of practice by issuing guidelines and through standardization:

  • The Commission should generally hold the reins in the application of the AIA. For example, it issues standardization requests in accordance with Art. 10 of the Standardization Regulation (https://dtn.re/BRL10Q), i.e. mandates for the development of those standards whose observance gives rise to a presumption of conformity (Art. 40 para. 1 and 2), and can – in the absence of relevant standards – issue corresponding "common specifications" (Art. 41 AIA).

  • According to Art. 96 AIA, it can also issue general guidelines on the practical implementation of the AIA. Although Art. 96 contains a list of points to be concretized – in particular the definition of AIS, the application of Art. 8 et seq. with the basic requirements, the classification as HRAIS (Art. 6 para. 5), the prohibited practices and transparency in accordance with Art. 50 AIA – it is not exhaustive.

  • The Commission furthermore approves codes of practice in accordance with Art. 56 AIA, i.e. specifications of the obligations of GPAIM providers.

  • It also provides templates and forms, which are likely to be of considerable importance in practice. For example, it is to provide a simplified form for the technical documentation of HRAIS for SMEs (Art. 11; Annex IV).

The Commission also receives notifications and reports:

  • real-time biometric remote identification for law enforcement purposes: notification by Member States of the relevant legal bases (Art. 5 para. 5) and annual reports by national market surveillance and data protection authorities (Art. 5 para. 6);

  • conformity assessment: notification by the notifying authorities of conformity assessment bodies (Art. 30 para. 2 f. and Art. 36 para. 1, 4 and 7); notification by the market surveillance authorities of exemptions for HRAIS under Art. 46 para. 1 (Art. 46 para. 3; the Commission may intervene);

  • GPAIM: notification by GPAIM providers of models with systemic risks (Art. 52 para. 1);

  • notification of the notifying authorities and market surveillance authorities by the Member States (Art. 70 para. 2 and 6);

  • notification by the national authorities of serious incidents (Art. 73 para. 11) in accordance with the Market Surveillance Regulation (https://dtn.re/ubfeIK);

  • annual notification by the market surveillance authorities of information from market surveillance and on the use of prohibited practices (Art. 74 para. 2);

  • notification by the Member States of the national authorities or public bodies responsible for supervising the protection of fundamental rights (Art. 77 para. 1 and 2);

  • information from the Member States in connection with risky AIS within the meaning of Art. 79 para. 1 (Art. 79 para. 3 et seq.);

  • information from the Member States in connection with risky AIS that the provider has classified as not high-risk (Art. 80 para. 3) and with compliant HRAIS that nevertheless entail a particular risk (Art. 82 para. 1 and 3);

  • notifications from the Member States on their sanctioning and other enforcement provisions and on their fining practices (Art. 99 para. 2 and 11); notification from the EDPS on its fining practice (Art. 100 para. 7).

The Commission also has intervention and decision-making powers:

  • sanctioning of providers of GPAIM (Art. 101 para. 1);

  • objections to exceptional authorizations for HRAIS pursuant to Art. 46 para. 1 (Art. 46 para. 4 and 5);

  • classification of a GPAIM as systemically risky (Art. 52 para. 2 – 5);

  • assessment of the procedures that providers of GPAIM or of systemically risky GPAIM can use to demonstrate compliance with their respective obligations under Art. 53 or 55 (where no harmonized standards exist; Art. 53 para. 4 and Art. 55 para. 2);

  • intervention if an AIS with particular risks within the meaning of Art. 79 para. 1 is not compliant, or a compliant HRAIS is nevertheless particularly risky, and the Commission does not agree with the measures taken by the competent market surveillance authority (Art. 81 and 82).

Finally, the Commission provides information through publications and announcements:

  • list of notified bodies (Art. 35 para. 2);

  • list of systemically risky GPAIM (Art. 52 para. 6);

  • list of central contact points of the Member States (Art. 70 para. 2);

  • HRAIS database in accordance with Annex III (Art. 71);

  • reporting to Parliament and the Council (Art. 112).

In addition, the Commission has enforcement powers with regard to GPAIM:

  • GPAIM are specifically regulated in Chapter V. The Commission is tasked with enforcing the provisions of this chapter; this is regulated in its own Section 5 of Chapter IX (post-market surveillance; exchange of information and market surveillance). It must be kept informed accordingly by the market surveillance authorities (Art. 73 para. 11, Art. 74 para. 2, Art. 77 para. 2, Art. 79 para. 3 ff., Art. 80 para. 3).

  • The Commission can intervene if it does not agree with measures taken by the Member States with regard to AIS or HRAIS with particular risks (Art. 81 and Art. 82 para. 4 f.).

  • It is also generally responsible for enforcing Chapter V (Art. 88 para. 1). To this end, it can request information from GPAIM providers (Art. 91 para. 1 and 3 and Art. 92 para. 3), appoint experts to assess GPAIM (Art. 92 para. 2) and require GPAIM providers to comply with their obligations, take risk mitigation measures or withdraw a GPAIM from the market (Art. 93 para. 1).

52 What is the role of the AI Office?

The AI Office was established by a decision of the Commission (https://dtn.re/cvmxvL), albeit with a slightly different designation, namely as the "European Office for Artificial Intelligence"; both terms refer to the AI Office. It is part of the Commission's Directorate-General for Communication Networks, Content and Technology. It has more than 140 employees and is divided into five units: the "Excellence in AI and Robotics Unit", the "Regulation and Compliance Unit", the "AI Safety Unit", the "AI Innovation and Policy Coordination Unit" and the "AI for Societal Good Unit".

The tasks of the Office are set out in Art. 3 No. 47, Art. 64, other provisions of the AIA and the aforementioned decision, which lists these and other tasks and the powers of the Office. The main tasks are as follows:

  • coordination tasks (e.g. cooperation with stakeholders, other Commission departments, other EU bodies and with the Member States and their authorities);

  • technical contributions (e.g. the monitoring of economic and technical developments, the drafting of guidelines and model terms [Art. 25, 27 para. 5, 50 para. 7, 53 para. 1 lit. d, 56 and 62 para. 2] and the preparation of Commission decisions [Art. 56]);

  • market surveillance of GPAIM and of AIS that a provider builds on the basis of its own GPAIM (Art. 88 and Art. 75 and Art. 3 of the aforementioned Commission decision). It checks compliance with the AIA by the relevant actors and also serves as a point of contact for reporting serious incidents (→ 45).

In addition, the Office also oversees the AI Pact (https://dtn.re/WJfwxl).

53 What is the role of the EAIB?

The AIA establishes the "European Artificial Intelligence Board" ("AI Board"; also "EAIB", for "European AI Board"; https://dtn.re/QQhGJ7).

It is to advise and support the Commission and the Member States in order to facilitate the uniform and effective application of the AIA (Art. 66 contains a list of its tasks; further tasks are defined elsewhere in the AIA). To this end, it supports the AI Office in the creation of codes of practice, among other things. The EDPB and the Commission often take opposing positions on the application of the GDPR; it remains to be seen whether this will also be the case under the AIA.

54 What other EU bodies does the AIA provide for?

An advisory forum supports the EAIB and the Commission with technical expertise. The advisory forum is made up of representatives from industry, start-ups, SMEs, civil society and academia as well as European institutions (e.g. the European Committee for Standardization CEN or ENISA) (Art. 67).

In addition, the Commission is to set up a scientific panel of independent experts ("scientific panel of independent experts"). It is made up of independent experts and is intended to support the AI Office in its market surveillance activities with scientific and technical expertise (Art. 68).

55 What is the role of natio­nal mar­ket sur­veil­lan­ce authorities?

The market surveillance authorities (Art. 3 No. 26 and 48) are responsible for market surveillance of HRAIS and GPAIM (Art. 74 ff.). Each Member State must appoint at least one such authority (Art. 70 para. 1). In the case of regulated products (Art. 6 para. 1), the authorities competent under the relevant product legislation are generally also the market surveillance authorities under the AIA (Art. 74 para. 3); in the financial sector, it is the financial supervisory authorities (Art. 74 para. 6), and for EU public bodies it is the EDPS (Art. 74 para. 9). AIS based on a provider’s own GPAIM (e.g. ChatGPT) are a special case; here, market surveillance lies with the AI Office (→ 52).

The powers and tasks of the market surveillance authorities are based in particular on the Market Surveillance Regulation (Art. 3 No. 26; Art. 14 ff. of that Regulation; https://dtn.re/QCMYaE) and on the requirements of Art. 70 para. 1. For example, they can

  • in the case of a serious incident, order that a HRAIS be recalled, withdrawn from the market or not made available on the market (Art. 19 of the Market Surveillance Regulation);

  • request information at any time from providers, who are subject to obligations to cooperate (Art. 74 para. 12 and 13 and Art. 75 para. 3); and they can

  • under certain circumstances, order tests of HRAIS (Art. 77 para. 3).

  • If an AIS presents a risk to the health and safety of persons, health and safety in the workplace, consumer protection, the environment, public security or other public interests (Art. 79 para. 1 in conjunction with Art. 3 No. 19 of the Market Surveillance Regulation), the competent market surveillance authority may check the conformity of the AIS concerned and, if necessary, order corrective measures or a recall (Art. 79 para. 5).

  • In the case of an AIS that the provider classifies as not high-risk, the market surveillance authority may – if it takes a different view – order that conformity be established (Art. 80 para. 1 and 2). In the case of compliant but nevertheless particularly risky HRAIS, it may order corrective measures (Art. 82 para. 1). It can also take measures in the event of formal errors, e.g. if a CE marking is missing (Art. 83).

The mar­ket sur­veil­lan­ce aut­ho­ri­ties also have the fol­lo­wing tasks in par­ti­cu­lar under the AIA:

  • Informing the Commission of certain legal provisions relating to real-time remote biometric identification for law enforcement purposes (Art. 5 para. 4 and 6);

  • Receipt of information and notifications, in particular the following:

  • From providers and operators of HRAIS on particular risks (Art. 79 para. 2 and Art. 26 para. 5), and from operators of a HRAIS on serious incidents (Art. 26 para. 5);

  • Copy of the mandate appointing an authorized representative, and notice of its termination, from representatives of non-European HRAIS providers (Art. 22 para. 3 and 4);

  • Infor­ma­ti­on on non-com­pli­ant HRAIS from importers (Art. 23 para. 2);

  • Fun­da­men­tal Rights Impact Assess­ments (FRIA) of public bodies (Art. 27 para. 3);

  • Infor­ma­ti­on on tests of HRAIS under real con­di­ti­ons (Art. 60);

  • Reports of serious incidents involving HRAIS (Art. 73 para. 1);

  • Infor­ma­ti­on from other bodies:

  • from national authorities and public bodies within the meaning of Art. 77, if a serious incident involving a HRAIS has been reported to them (Art. 73 para. 7);

  • from the Commission on measures in the event of serious incidents (Art. 19 para. 1 of the Market Surveillance Regulation);

  • annu­al report­ing to the Com­mis­si­on (Art. 74 para. 2);

  • Exceptional approval of a HRAIS in accordance with Art. 46;

  • Approval and review of tests of HRAIS under real-world conditions (Art. 60 para. 4 lit. b, Art. 76), including intervention where necessary if a serious incident occurs or a test does not comply with the applicable conditions (Art. 76 para. 3 and 5);

  • Acceptance of complaints from natural or legal persons (Art. 85).

56 What is the role of the con­for­mi­ty assess­ment bodies?

Conformity assessment bodies carry out the conformity assessment (Art. 3 No. 21). They are appointed by the notifying authorities (Art. 28 para. 1, 29 para. 1 and 30 para. 1; → 57) and must meet the requirements of Art. 30; in particular, they must be independent. Conformity assessment bodies in third countries may also operate under the AIA, provided that a corresponding agreement exists with the EU (Art. 39). A conformity assessment body is called a “notified body” once it has been notified in accordance with the relevant provisions (Art. 3 No. 22). On the conformity assessment procedure → 15.

57 What is the role of the noti­fy­ing authorities?

Each Member State must appoint a notifying authority (Art. 28 para. 1 and 70 para. 1). It is responsible for setting up and carrying out the procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring (Art. 3 No. 19). Notifying authorities owe their name to the fact that they must notify the Commission (→ 51) and the other Member States of each conformity assessment body via a notification instrument managed by the Commission; only then do conformity assessment bodies become notified bodies and can start their work (→ 56).

Sup­ple­men­ta­ry questions

58 What role does data protection play in the AIA?

Data protection is of considerable importance for AIS, particularly in connection with the training of GPAIM. The AIA therefore frequently refers to the GDPR, in particular for terms legally defined therein (Art. 3 No. 37, 50, 51 and 52) or, declaratorily, to provisions of the GDPR (e.g. in Art. 26 para. 9 on the use of the operating instructions in a data protection impact assessment, or in Art. 50 para. 3 on informing data subjects), but clarifies that the GDPR applies without restriction to the processing of personal data (Art. 2 para. 7, Art. 10 para. 5).

Art. 10 para. 5 contains the only special legal basis in the AIA. There is a conflict of objectives between data minimization and the relevance of the training data. The AIA resolves this conflict by allowing even particularly sensitive personal data to be processed in exceptional cases if this is strictly necessary when training a HRAIS in order to identify and reduce bias (more precisely: the prohibition in Art. 9 para. 1 GDPR is lifted in this respect; a legal basis under Art. 6 GDPR remains necessary; ECJ, Case C‑667/21, https://dtn.re/ATzHFf). However, the special conditions pursuant to Art. 10 para. 5 lit. a–f must be observed.
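To make the mechanism tangible, here is a minimal, purely illustrative Python sketch with fabricated data (the field names “group” and “selected” are our own placeholders, not terms of the AIA): detecting bias of the kind Art. 10 para. 5 addresses typically means comparing outcomes across a special-category attribute, which is impossible without processing that attribute.

```python
# Fabricated illustration data: model decisions plus a special-category
# attribute ("group") that is processed solely to measure bias.
records = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

def selection_rate(group: str) -> float:
    """Share of positive decisions within one group."""
    members = [r for r in records if r["group"] == group]
    return sum(r["selected"] for r in members) / len(members)

# A simple demographic-parity check: without the sensitive attribute,
# this per-group comparison (and hence the bias detection) is impossible.
gap = selection_rate("A") - selection_rate("B")
print(f"Selection-rate gap between groups A and B: {gap:.2f}")  # 0.33
```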

More important than this question is the discussion about the applicability of data protection law to LLM training and, more broadly, about whether and where an LLM processes personal data, which party plays which role, and how the rights of data subjects can be ensured. This discussion is currently ongoing. Reference should be made in particular to the following documents and statements (in chronological order):

The position taken in particular by the HmbBfDI – that an LLM cannot contain personal data because it does not copy input data but mathematically represents relationships between tokens as vectors or tensors – falls short, because the form in which personal data is stored is not decisive: if personal information is stored not as such but as mathematical relationships from which it can in principle be reproduced, this still constitutes processing of personal data (cf. datenrecht.ch, https://dtn.re/BuTaCE). Questions such as how data subjects’ rights can be implemented in LLMs are therefore not irrelevant.
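The argument can be illustrated with a deliberately trivial, hypothetical sketch (fabricated data; a real LLM is vastly more complex): apart from a generic vocabulary, the following toy “model” consists only of a matrix of floating-point weights – no stored text – yet greedy decoding reproduces the personal information it was trained on.

```python
# A tiny "language model" trained on one fabricated sentence. The model
# itself (the weights matrix) contains only floats, no text.
corpus = "the user alice lives at 12 example street in bern".split()
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(corpus))}
ids = [vocab[tok] for tok in corpus]

# "Training": count bigram transitions between token IDs.
n = len(vocab)
weights = [[0.0] * n for _ in range(n)]
for cur, nxt in zip(ids, ids[1:]):
    weights[cur][nxt] += 1.0

# "Inference": greedy decoding over the float matrix regurgitates the
# (fabricated) personal data token by token.
inv = {i: tok for tok, i in vocab.items()}
state = vocab["alice"]
output = ["alice"]
for _ in range(7):
    state = max(range(n), key=lambda j: weights[state][j])
    output.append(inv[state])
print(" ".join(output))  # -> alice lives at 12 example street in bern
```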

Beyond the question of whether embeddings have a personal reference (→ 12), data protection authorities have also expressed views on the relationship between data protection and artificial intelligence, for example:

  • EDPB, Statement 3/2024 on data protection authorities’ role in the Artificial Intelligence Act framework, July 16, 2024 (https://dtn.re/vGUUWh)

  • DSK, Gui­dance on Arti­fi­ci­al Intel­li­gence and Data Pro­tec­tion, May 6, 2024 (https://dtn.re/S63kDn)

  • BayL­DA, in the 29th Acti­vi­ty Report 2019 (https://dtn.re/rg7FEr)

  • ICO, various infor­ma­ti­on on AI topics (https://dtn.re/g91v0E)

  • Austria: FAQ on the topic of AI and data protection of the Austrian DPA, July 2, 2024 (https://dtn.re/Sz4sDS)

  • France: CNIL, Self-assessment guide for artificial intelligence (AI) systems (https://dtn.re/44hM5n)

  • Italy: Garante, Information on the protection of personal data against scraping, March 20, 2024 (https://dtn.re/TuzT85)

  • Switz­er­land: See → 63

Several European data protection supervisory authorities (“SAs”) have also initiated investigations against OpenAI in connection with ChatGPT. The EDPB set up a corresponding task force in April 2023, whose work is still ongoing; a brief interim report was published on May 23, 2024 (https://dtn.re/HyvPHo).

59 How does the AI Act deal with copyrights?

In the area of copyright law, the AIA recognizes the problem of training with protected works. It does not address the substance of this problem, but requires providers of GPAIM, among other things, to have a strategy for compliance with EU copyright law and to publish a summary of the training data (→ 39).

Otherwise, however, the allocation of exclusive rights and the determination of their scope and the corresponding limitations are left to the relevant copyright provisions. In this context, the main question discussed is under which conditions the use of copyrighted works for the training of an LLM is infringing – an understandable discussion, since LLMs compete in particular with the creatives whose works they were trained on.

Under the principle of territoriality, whether an act infringes copyright is determined by the law of the country for which protection is claimed (for Swiss conflict of laws: Art. 110 PILA). In the EU, this is the copyright law of the individual Member States. However, see → 40 on the question of whether GPAIM providers outside the EU must comply with the AIA’s requirements regarding a copyright strategy.

In Switzerland, the relevant rules can be found in the CopA. In particular, the scope of the limitation provisions is unclear; a distinction must be made between the procurement of copyrighted material and its use for training purposes:

Procurement and the reproduction of material generally associated with it are – in contrast to the mere enjoyment of a work, such as searching through text, or labeling for supervised learning (→ 10) – relevant under copyright law (as long as the concept of reproduction is not limited to acts intended to make the work perceptible). If there is no license – which can be granted expressly or tacitly – the question therefore arises as to whether the limitation for personal use pursuant to Art. 19 para. 1 CopA applies. There is currently legal uncertainty here:

  • One of the issues discussed is whether training is covered by the exemption for reproduction and provision for “internal information or documentation” (Art. 19 para. 1 lit. c CopA). Since such reproduction is essentially exempted only for non-commercial purposes – and should therefore generally not cover the training of an LLM – and since the reproduction of commercially available copies of works is not covered (Art. 19 para. 3 lit. a CopA), this limitation will often not apply.

  • Also under discussion is the limitation for so-called “text and data mining” (TDM), which exempts reproduction for scientific purposes where it is required by a technical process, e.g. the semantic analysis of the source material (Art. 24d CopA; see the illustrative sketch after this list). Although the concept of science is broad, applied research by private companies also requires a serious cognitive purpose. Whether the fact that a trained LLM can be used for different purposes suffices to attribute the required cognitive purpose to the training is uncertain; in any case, it is not enough that a trained LLM can be used for research purposes – the research purpose would have to encompass the training itself.

In addition, the procurement of the works used must be lawful (Art. 24d CopA), which, in the case of publicly available works for example, can neither be affirmed nor denied across the board.

  • A merely volatile (transient) reproduction would be exempt as long as it is transient or incidental, constitutes an integral and essential part of a technical process, serves exclusively for transmission in a network between third parties by an intermediary or for a lawful use, and has no independent economic significance (Art. 24a CopA). These requirements – which apply cumulatively – are unlikely to be met by the compilation of a training, test and/or validation data set (and hardly by the training process itself, which has considerable economic significance). Art. 24a CopA is therefore hardly a basis for the entire training process with copyrighted material.
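As an aside to the TDM point above, a trivial, hypothetical sketch (placeholder text, not a real mining pipeline) shows why TDM necessarily involves reproduction: the text must be loaded into memory before any semantic statistics can be computed.

```python
# Hypothetical TDM step on placeholder text: the analysis itself is
# unproblematic, but it presupposes an in-memory copy of the work.
from collections import Counter

document = "alpine weather data shows alpine weather patterns"  # stands in for a protected work

tokens = document.lower().split()   # this copy is the copyright-relevant act
term_frequencies = Counter(tokens)  # simple term statistics

# Co-occurrence counts within a two-token window: a basic building
# block of semantic analysis (e.g. as input for embeddings).
pairs = Counter()
for i, tok in enumerate(tokens):
    for neighbor in tokens[i + 1 : i + 3]:
        pairs[(tok, neighbor)] += 1

print(term_frequencies.most_common(2))  # [('alpine', 2), ('weather', 2)]
print(pairs.most_common(2))
```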

The output, for its part, is hardly protected by copyright, because an intellectual, i.e. human, creation is lacking (Art. 2 para. 1 CopA) – at least unless the output was demonstrably shaped by a natural person. For the same reason, an AI cannot be an inventor within the meaning of patent law; here too, protection presupposes that the inventor is a human being.

60 What applies when using AI in the workplace?

The AIA con­ta­ins a few pro­vi­si­ons spe­ci­fi­cal­ly rela­ted to the use of AIS in the work context:

  • The use of an AIS is prohibited in a few cases under Art. 5 (→ 27). In the employment context, this may be the case, for example, with emotion recognition in the workplace, the exploitation of employees’ vulnerability, or social scoring;

  • The term HRAIS covers workplace-related use cases (→ 28), for example where an AIS is used to manage access to vocational training and further education, in the recruitment or selection of job applicants, or in decisions on working conditions, promotions or dismissals.

  • Befo­re com­mis­sio­ning or using a HRAIS in the work­place, the ope­ra­tor must inform the employee repre­sen­ta­ti­ves and the affec­ted employees that they will be “sub­ject to the use of the high-risk AI system” (Art. 26 para. 7 AIA).

  • Information must also be provided if a HRAIS is used – also, but not only, in the work context – to make or support decisions (Art. 26 para. 11 AIA).

Other­wi­se, howe­ver, the pro­tec­tion of employees and appli­cants is left to the other pro­vi­si­ons of the appli­ca­ble law, in par­ti­cu­lar data pro­tec­tion law and public employment law, which may pro­vi­de for par­ti­ci­pa­ti­on rights.

However, legislative projects are underway in the EU to improve the protection of employees. One example is the EU’s draft Platform Work Directive (https://dtn.re/G3ytlM), for which the Council’s approval is still pending.

61 Which inter­na­tio­nal stan­dards affect AI?

Seve­ral stan­dards and stan­dar­dizati­on initia­ti­ves deal with AI. The Inter­na­tio­nal Orga­nizati­on for Stan­dar­dizati­on (ISO) and the Inter­na­tio­nal Elec­tro­tech­ni­cal Com­mis­si­on (IEC) have joint­ly deve­lo­ped standards:

  • ISO/IEC 42001:2023 (https://dtn.re/L8KOIs): Requi­re­ments for AI manage­ment systems

  • ISO/IEC TR 24028:2020 (https://dtn.re/YYy0Ha): Trust­wort­hi­ness of AI systems, cri­te­ria for trans­pa­ren­cy, con­trol and explainability

  • ISO/IEC 5259-1: Basis of the ISO 5259 series on data quality for analytics and ML (https://dtn.re/TggI5G)

  • ISO/IEC TR 5469:2024: Use of AI in safe­ty-rela­ted func­tions (https://dtn.re/vbc8IL)

In Europe, CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) are involved in the development of AI standards via the joint committee CEN-CENELEC JTC 21 “Artificial Intelligence”. It has published several standards, and more are being developed (https://dtn.re/Gx0XMT). Published standards include, for example:

  • CEN/CLC ISO/IEC/TR 24027:2023: Bias

  • CEN/CLC ISO/IEC/TR 24029-1:2023: Assessment of the robustness of neural networks

The US National Institute of Standards and Technology (NIST) has developed an AI risk management framework, the AI RMF 1.0, published in January 2023, which was subsequently supplemented by “profiles”, i.e. implementations for specific circumstances, applications or technologies. One example is NIST AI 600-1, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (https://dtn.re/z3H7BJ).

62 What is the Coun­cil of Euro­pe AI Convention?

On May 17, 2024, the Council of Europe (not the Council of the European Union) adopted its AI Convention (Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; “AI Convention”). The text of the AI Convention is available in English, together with the Explanatory Report, on datenrecht.ch (https://dtn.re/8zndsz).

The con­ven­ti­on is a frame­work agree­ment to be imple­men­ted by the rati­fy­ing sta­tes – of which Switz­er­land will cer­tain­ly be one – which is inten­ded to ensu­re stan­dards with regard to human rights, demo­cra­cy and the rule of law when using AI systems.

Mem­bers and non-mem­bers of the Coun­cil of Euro­pe are now invi­ted to sign and rati­fy the Frame­work Con­ven­ti­on. If Switz­er­land rati­fi­es the Con­ven­ti­on, it must trans­po­se it into Swiss law (→ 63).

The requirements of the AI Convention are very vague. Moreover, it binds the member states only when legislating for the public sector; for the private sector, member states are merely required to address the risks of AI systems in a manner “compatible with the object and purpose” of the AI Convention (Art. 3 para. 1 of the AI Convention).

63 How does Switz­er­land regu­la­te the use of arti­fi­ci­al intelligence?

There is currently no overarching regulation of the use of artificial intelligence in Switzerland. At the end of 2023, the Federal Council instructed DETEC to develop possible approaches for regulation by the end of 2024, as part of the interdepartmental coordination group on EU digital policy (see the media release, https://dtn.re/uV1Eau). DETEC – or OFCOM on its behalf – is to start from the applicable law and identify regulatory approaches that are compatible with both the AIA and the AI Convention (→ 62).

OFCOM’s analysis – including the underlying studies prepared for this purpose, e.g. on regulatory gaps in current law – and the Federal Council’s decision on the direction to be taken should be available by the end of 2024.

However, it is currently unclear which approaches DETEC will propose and which will ultimately prevail. A full adoption of the AIA is unlikely to stand much of a political chance as long as the EU does not make this a condition for participation in the single market, and the AI Convention is so vague that its content hardly predetermines regulation, especially not in the private sector (→ 62). The business community (but also academia) is insisting on lean regulation, while civil society organizations are calling for stricter provisions, particularly to protect against discrimination (e.g. AlgorithmWatch). The most obvious option at present would appear to be an omnibus act that selectively amends the relevant legal bases.

Various poli­ti­cal initia­ti­ves are also pen­ding, such as the fol­lo­wing (at fede­ral level):

  • 24.3796, Moti­on Glätt­li, June 14, 2024, Trans­pa­rent risk-based impact assess­ments for the use of AI and algo­rith­ms by the fede­ral govern­ment (https://dtn.re/vWwoDP)

  • 24.3795, Moti­on Glätt­li, June 14, 2024, Pro­tec­tion against dis­cri­mi­na­ti­on in the use of AI and algo­rith­ms (https://dtn.re/B46Qtc)

  • 24.3611, Inter­pel­la­ti­on Cot­tier, June 13, 2024, Arti­fi­ci­al Intel­li­gence. Admi­ni­stra­ti­ve coor­di­na­ti­on and inten­ti­ons regar­ding the new Coun­cil of Euro­pe Frame­work Con­ven­ti­on (https://dtn.re/hdDPxQ)

  • 24.3616, Inter­pel­la­ti­on Gös­si, June 13, 2024, Media and arti­fi­ci­al intel­li­gence (https://dtn.re/JaEh4n)

  • 24.3415, Inter­pel­la­ti­on Tschopp, April 17, 2024, Plat­forms and AI: Users’ rights (https://dtn.re/HBZFOE)

  • 24.3363, Moti­on Chap­puis, March 15, 2024, For a sove­reign digi­tal infras­truc­tu­re in Switz­er­land in the age of arti­fi­ci­al intel­li­gence (https://dtn.re/s4SsC9)

  • 24.3346, Interpellation Docourt, March 15, 2024, EU directive on platform work. Does Switzerland want to follow suit? (https://dtn.re/UNvBOq)

  • 24.3235, Inter­pel­la­ti­on Mar­ti, March 14, 2024, Arti­fi­ci­al intel­li­gence and the impact on copy­right (https://dtn.re/jpX0Cg)

  • 24.3209, Moti­on Juil­lard, March 14, 2024, For a sove­reign digi­tal infras­truc­tu­re in Switz­er­land in the age of arti­fi­ci­al intel­li­gence (AI) (https://dtn.re/NsqdKN)

  • 23.4517, Inter­pel­la­ti­on Gug­ger, Decem­ber 22, 2023, Arti­fi­ci­al intel­li­gence and par­ti­ci­pa­ti­on. Are the­re gaps in the law? (https://dtn.re/hl1Q54)

  • 23.4492, Moti­on Gysi, Decem­ber 22, 2023, Arti­fi­ci­al intel­li­gence in the work­place. Streng­thening the par­ti­ci­pa­ti­on rights of employees (https://dtn.re/PH8ab1)

  • 23.4051, Inter­pel­la­ti­on Schlat­ter, Sep­tem­ber 29, 2023, Arti­fi­ci­al intel­li­gence and robo­tics. Ethics belongs in edu­ca­ti­on! (https://dtn.re/PMNgtC)

  • 23.393, Interpellation Cottier, June 16, 2023, Artificial intelligence. What framework conditions need to be created to make the most of it and avoid undesirable developments? (https://dtn.re/FXxB9v)

  • 23.3812, Inter­pel­la­ti­on Wid­mer, June 15, 2023, Arti­fi­ci­al Intel­li­gence. Dan­gers and poten­ti­als for demo­cra­cy (https://dtn.re/ZkaTUc)

  • 23.4133, Inter­pel­la­ti­on Mar­ti, Sep­tem­ber 28, 2023, Algo­rith­mic dis­cri­mi­na­ti­on. Is the legal pro­tec­tion against dis­cri­mi­na­ti­on suf­fi­ci­ent? (https://dtn.re/xr97Zq)

  • 23.3849, Moti­on Ben­da­han, June 15, 2023, Crea­te a com­pe­tence cen­ter or com­pe­tence net­work for arti­fi­ci­al intel­li­gence in Switz­er­land (https://dtn.re/sqLWYa)

  • 23.3654, Inter­pel­la­ti­on Rini­ker, June 13, 2023, Switzerland’s role in inter­na­tio­nal coope­ra­ti­on in the field of arti­fi­ci­al intel­li­gence (https://dtn.re/sUoUb3)

  • 23.3806, Moti­on Mar­ti, June 15, 2023, Decla­ra­ti­on obli­ga­ti­on for arti­fi­ci­al intel­li­gence appli­ca­ti­ons and auto­ma­ted decis­i­on-making systems (https://dtn.re/D3FmNo)

  • 23.3563, Moti­on Maha­im, May 4, 2023, regu­la­te deepf­akes (https://dtn.re/kwNWvh)

  • 23.3516, Inter­pel­la­ti­on Fel­ler, May 2, 2023, Gene­ral or tem­po­ra­ry ban on cer­tain arti­fi­ci­al intel­li­gence plat­forms (https://dtn.re/Ig8JPJ)

  • 23.3201, Postu­la­te Dobler, March 16, 2023, Legal situa­ti­on of arti­fi­ci­al intel­li­gence. Cla­ri­fy uncer­tain­ties, pro­mo­te inno­va­ti­on! (https://dtn.re/e7sGlM)

  • 23.3147, Inter­pel­la­ti­on Ben­da­han, March 14, 2023, Regu­la­ti­on of arti­fi­ci­al intel­li­gence in Switz­er­land (https://dtn.re/xMVLIE)

  • 21.4406, Postu­la­te Mar­ti, Decem­ber 9, 2021, Report on the regu­la­ti­on of auto­ma­ted decis­i­on-making systems (https://dtn.re/PQbXqs)

  • 21.3206, Inter­pel­la­ti­on Poin­tet, March 17, 2021, Which sta­te pro­ce­s­ses rely on arti­fi­ci­al intel­li­gence? (https://dtn.re/WUw9Hr)

  • 21.3012, Postu­la­te Secu­ri­ty Poli­cy Com­mis­si­on, Janu­ary 15, 2021, Clear rules for auto­no­mous wea­pons and arti­fi­ci­al intel­li­gence (https://dtn.re/duRhvk)

  • 19.3919, Inter­pel­la­ti­on Rik­lin, June 21, 2019, Arti­fi­ci­al intel­li­gence and digi­tal trans­for­ma­ti­on. We need a holi­stic stra­tegy (https://dtn.re/5x93tL)

Of cour­se, the other­wi­se appli­ca­ble pro­vi­si­ons also app­ly to the use of AI. This applies, for exam­p­le, to

  • data protection law (if personal data is processed during training or deployment),

  • secrecy provisions (if secret information is used for training or as input),

  • employment contract law (if personal data of applicants and employees is processed and if an AI affects the employer’s duty of care),

  • public labor law (e.g. if obligations to cooperate apply or behavioral monitoring is at issue),

  • personality rights (e.g. when conversations or team calls are recorded),

  • fair trading law (when AI-generated content can be misleading),

  • copyright law (e.g. when an AI is trained with works or works are used as input, and when the protection of output is at issue),

  • criminal law (for recordings of non-public conversations or generally when AI is used for criminal behavior),

  • product liability and other liability law,

  • other areas of law.

Sec­to­ral regu­la­ti­ons may also be affec­ted. A few super­vi­so­ry aut­ho­ri­ties have alre­a­dy for­mu­la­ted expec­ta­ti­ons, in par­ti­cu­lar FINMA (https://dtn.re/bOT1Ez).

Pri­va­te actors have also adopted rules in the mean­ti­me. This applies abo­ve all to par­ti­cu­lar­ly expo­sed play­ers such as

  • the media (e.g. the SRG journalistic guidelines, https://dtn.re/f1UTYZ),

  • poli­ti­cal par­ties (e.g. with the AI Code of the Greens, the GLP, the SP, the Cen­ter Par­ty and the EPP, https://dtn.re/1te4U8) or

  • rese­arch and edu­ca­ti­on (e.g. with the recom­men­da­ti­ons for deal­ing with gene­ra­ti­ve arti­fi­ci­al intel­li­gence at UZH, https://dtn.re/aBstLV).

Num­e­rous pri­va­te com­pa­nies have also issued or are in the pro­cess of issuing gui­de­lines, codes and ins­truc­tions, some of which are public and some non-public.

Appen­dix: Defi­ned terms

Terms defined in Art. 3 AIA
No. | English | German (translated)
1 | AI system | AI system
2 | Risk | Risk
3 | Provider | Provider
4 | Deployer | Operator
5 | Authorized representative | Authorized representative
6 | Importer | Importer
7 | Distributor | Retailer
8 | Operator | Actor
9 | Placing on the market | Placing on the market
10 | Making available on the market | Provision on the market
11 | Putting into service | Commissioning
12 | Intended purpose | Intended use
13 | Reasonably foreseeable misuse | Reasonably foreseeable misuse
14 | Safety component | Safety component
15 | Instructions for use | Operating instructions
16 | Recall of an AI system | Recall of an AI system
17 | Withdrawal of an AI system | Withdrawal of an AI system
18 | Performance of an AI system | Performance of an AI system
19 | Notifying authority | Notifying authority
20 | Conformity assessment | Conformity assessment
21 | Conformity assessment body | Conformity assessment body
22 | Notified body | Notified body
23 | Substantial modification | Significant change
24 | CE marking | CE marking
25 | Post-market monitoring system | Post-market surveillance system
26 | Market surveillance authority | Market surveillance authority
27 | Harmonized standard | Harmonized standard
28 | Common specification | Common specification
29 | Training data | Training data
30 | Validation data | Validation data
31 | Validation data set | Validation data set
32 | Testing data | Test data
33 | Input data | Input data
34 | Biometric data | Biometric data
35 | Biometric identification | Biometric identification
36 | Biometric verification | Biometric verification
37 | Special categories of personal data | Special categories of personal data
38 | Sensitive operational data | Sensitive operational data
39 | Emotion recognition system | Emotion recognition system
40 | Biometric categorization system | System for biometric categorization
41 | Remote biometric identification system | Biometric remote identification system
42 | Real-time remote biometric identification system | Biometric real-time remote identification system
43 | Post-remote biometric identification system | System for subsequent biometric remote identification
44 | Publicly accessible space | Publicly accessible space
45 | Law enforcement authority | Law enforcement agency
46 | Law enforcement | Prosecution
47 | AI Office | Office for Artificial Intelligence
48 | National competent authority | Competent national authority
49 | Serious incident | Serious incident
50 | Personal data | Personal data
51 | Non-personal data | Non-personal data
52 | Profiling | Profiling
53 | Real-world testing plan | Plan for a test under real conditions
54 | Sandbox plan | Plan for the real laboratory
55 | AI regulatory sandbox | AI real laboratory
56 | AI literacy | AI competence
57 | Testing in real-world conditions | Test under real conditions
58 | Subject | Test participant
59 | Informed consent | Informed consent
60 | Deep fake | Deepfake
61 | Widespread infringement | Widespread violation
62 | Critical infrastructure | Critical infrastructures
63 | General-purpose AI model | AI model with general purpose
64 | High-impact capabilities | Skills with high impact
65 | Systemic risk | Systemic risk
66 | General-purpose AI system | AI system with general purpose
67 | Floating-point operation | Floating point operation
68 | Downstream provider | Downstream provider