datenrecht.ch

AI Act (AI Regulation)

Text of the AI Act in the final version adopted by the Parliament on 21 May 2024. The assignment of the recitals to the articles is offered as a reading aid; it is neither binding nor always unambiguously possible.


A PDF version is available here.



Table of Contents

Chapter I General provisions

General Recitals

(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

(2) This Regulation should be applied in accordance with the values of the Union enshrined in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.

(3) AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including across borders, and can easily circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that AI is trustworthy and safe and is developed and used in accordance with fundamental rights obligations. Diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of law enforcement, of the use of AI systems for risk assessments of natural persons for the purpose of law enforcement and of the use of AI systems of biometric categorisation for the purpose of law enforcement, it is appropriate to base this Regulation, in so far as those specific rules are concerned, on Article 16 TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.

(4) AI is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of AI can provide key competitive advantages to undertakings and support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.

(5) At the same time, depending on the circumstances regarding its specific application, use, and level of technological development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.

(6) Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.

(7) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for high-risk AI systems should be established. Those rules should be consistent with the Charter, non-discriminatory and in line with the Union’s international trade commitments. They should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade and the Ethics guidelines for trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG).

(8) A Union legal framework laying down harmonised rules on AI is therefore needed to foster the development, use and uptake of AI in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, including democracy, the rule of law and environmental protection as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. Those rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI systems in line with Union values and unlocking the potential of the digital transformation across all regions of the Union. By laying down those rules as well as measures in support of innovation with a particular focus on small and medium enterprises (SMEs), including startups, this Regulation supports the objective of promoting the European human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical AI as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.

(144) In order to promote and protect innovation, the AI-on-demand platform, all relevant Union funding programmes and projects, such as Digital Europe Programme, Horizon Europe, implemented by the Commission and the Member States at Union or national level should, as appropriate, contribute to the achievement of the objectives of this Regulation.

(145) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers, in particular SMEs, including start-ups, and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities established by the Commission and the Member States at Union or national level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities are able to provide in particular technical and scientific support to providers and notified bodies.

(147) It is appropriate that the Commission facilitates, to the extent possible, access to testing and experimentation facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is, in particular, the case as regards expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulations (EU) 2017/745 and (EU) 2017/746.

(176) Since the objective of this Regulation, namely to improve the functioning of the internal market and to promote the uptake of human centric and trustworthy AI, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection against harmful effects of AI systems in the Union and supporting innovation, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.

(180) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(1) and (2) of Regulation (EU) 2018/1725 and delivered their joint opinion on 18 June 2021,

Article 1 Subject matter

1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.

2. This Regulation lays down:

(a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;

(b) prohibitions of certain AI practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for certain AI systems;

(e) harmonised rules for the placing on the market of general-purpose AI models;

(f) rules on market monitoring, market surveillance, governance and enforcement;

(g) measures to support innovation, with a particular focus on SMEs, including start-ups.

Article 2 Scope

1. This Regulation applies to:

(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;

(21) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.

(23) This Regulation should also apply to Union institutions, bodies, offices and agencies when acting as a provider or deployer of an AI system.

(b) deployers of AI systems that have their place of establishment or are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;

(d) importers and distributors of AI systems;

(e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(f) authorised representatives of providers, which are not established in the Union;

(82) To enable enforcement of this Regulation and create a level playing field for operators, and, taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established in third countries should, by written mandate, appoint an authorised representative established in the Union. This authorised representative plays a pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or put into service in the Union by those providers who are not established in the Union and in serving as their contact person established in the Union.

(g) affected persons that are located in the Union.
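The operator categories in Article 2(1) can be read as a decision rule. The following is a purely illustrative, highly simplified Python sketch (not legal advice): a real scope analysis also depends on Article 2(2)–(12), the definitions in Article 3 and the recitals, and all identifiers below are invented for this sketch.

```python
def within_personal_scope(role: str,
                          in_union: bool,
                          places_on_union_market: bool = False,
                          output_used_in_union: bool = False) -> bool:
    """Rough, illustrative mapping of the operator categories in Article 2(1)."""
    if role == "provider":
        # (a) providers placing AI systems or general-purpose AI models on the
        # Union market, wherever established; (c) third-country providers
        # where the system's output is used in the Union
        return places_on_union_market or output_used_in_union
    if role == "deployer":
        # (b) deployers established or located within the Union;
        # (c) third-country deployers where the output is used in the Union
        return in_union or output_used_in_union
    if role in {"importer", "distributor", "product_manufacturer",
                "authorised_representative"}:
        # (d)-(f) actors in the Union distribution chain
        return True
    if role == "affected_person":
        # (g) affected persons located in the Union
        return in_union
    return False
```

For example, under point (c) a deployer established in a third country is still caught when the output produced by the AI system is used in the Union.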

2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.

3. This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences.

This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

(24) If, and insofar as, AI systems are placed on the market, put into service, or used with or without modification of such systems for military, defence or national security purposes, those should be excluded from the scope of this Regulation regardless of which type of entity is carrying out those activities, such as whether it is a public or private entity. As regards military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by the specificities of the Member States’ and the common Union defence policy covered by Chapter 2 of Title V TEU that are subject to public international law, which is therefore the more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities. As regards national security purposes, the exclusion is justified both by the fact that national security remains the sole responsibility of Member States in accordance with Article 4(2) TEU and by the specific nature and operational needs of national security activities and specific national rules applicable to those activities. Nonetheless, if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used outside those temporarily or permanently for other purposes, for example, civilian or humanitarian purposes, law enforcement or public security purposes, such a system would fall within the scope of this Regulation.

In that case, the entity using the AI system for other than military, defence or national security purposes should ensure the compliance of the AI system with this Regulation, unless the system is already compliant with this Regulation. AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation. In those cases, the fact that an AI system may fall within the scope of this Regulation should not affect the possibility of entities carrying out national security, defence and military activities, regardless of the type of entity carrying out those activities, to use AI systems for national security, military and defence purposes, the use of which is excluded from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes which is used with or without modification for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities.

4. This Regulation applies neither to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.

(22) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are not placed on the market, put into service, or used in the Union. This is the case, for example, where an operator established in the Union contracts certain services to an operator established in a third country in relation to an activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used in a third country by the operator could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union.

Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of cooperation or international agreements concluded at Union or national level for law enforcement and judicial cooperation with the Union or the Member States, provided that the relevant third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Where relevant, this may cover activities of entities entrusted by the third countries to carry out specific tasks in support of such law enforcement and judicial cooperation. Such framework for cooperation or agreements have been established bilaterally between Member States and third countries or between the European Union, Europol and other Union agencies and third countries and international organisations. The authorities competent for supervision of the law enforcement and judicial authorities under this Regulation should assess whether those frameworks for cooperation or international agreements include adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Recipient national authorities and Union institutions, bodies, offices and agencies making use of such outputs in the Union remain accountable to ensure their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should make utmost efforts to align those agreements with the requirements of this Regulation.

5. This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services as set out in Chapter II of Regulation (EU) 2022/2065.

(11) This Regulation should be without prejudice to the provisions regarding the liability of providers of intermediary services as set out in Regulation (EU) 2022/2065 of the European Parliament and of the Council.

6. This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.

(25) This Regulation should support innovation, should respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary to ensure that this Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service. As regards product-oriented research, testing and development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior to those systems and models being put into service or placed on the market. That exclusion is without prejudice to the obligation to comply with this Regulation where an AI system falling into the scope of this Regulation is placed on the market or put into service as a result of such research and development activity and to the application of provisions on AI regulatory sandboxes and testing in real world conditions.

Furthermore, without prejudice to the exclusion of AI systems specifically developed and put into service for the sole purpose of scientific research and development, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. In any event, any research and development activity should be carried out in accordance with recognised ethical and professional standards for scientific research and should be conducted in accordance with applicable Union law.

7. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to Article 10(5) and Article 59 of this Regulation.

(9) Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council, Decision No 768/2008/EC of the European Parliament and of the Council and Regulation (EU) 2019/1020 of the European Parliament and of the Council (New Legislative Framework). The harmonised rules laid down in this Regulation should apply across sectors and, in line with the New Legislative Framework, should be without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary. As a consequence, all rights and remedies provided for by such Union law to consumers, and other persons on whom AI systems may have a negative impact, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC remain unaffected and fully applicable.

Furthermore, in the context of employment and protection of workers, this Regulation should therefore not affect Union law on social policy and national labour law, in compliance with Union law, concerning employment and working conditions, including health and safety at work and the relationship between employers and workers. This Regulation should also not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States as well as the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law.

This Regulation should not affect the provisions aiming to improve working conditions in platform work laid down in a Directive of the European Parliament and of the Council on improving working conditions in platform work. Moreover, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of the transparency, technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national law, in compliance with Union law, having the effect of limiting the use of certain AI systems where such law falls outside the scope of this Regulation or pursues legitimate public interest objectives other than those pursued by this Regulation. For example, national labour law and law on the protection of minors, namely persons below the age of 18, taking into account the UNCRC General Comment No 25 (2021) on children’s rights in relation to the digital environment, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, should not be affected by this Regulation.

(10) The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 and (EU) 2018/1725 of the European Parliament and of the Council and Directive (EU) 2016/680 of the European Parliament and of the Council. Directive 2002/58/EC of the European Parliament and of the Council additionally protects private life and the confidentiality of communications, including by way of providing conditions for any storing of personal and non-personal data in, and access from, terminal equipment. Those Union legal acts provide the basis for sustainable and responsible data processing, including where data sets include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments.

It also does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from Union or national law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and the use of AI systems established under this Regulation should facilitate the effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights.

8. This Regulation does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. Testing in real world conditions shall not be covered by that exclusion.

9. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.

10. This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

11. This Regu­la­ti­on does not pre­clude the Uni­on or Mem­ber Sta­tes from main­tai­ning or intro­du­cing laws, regu­la­ti­ons or admi­ni­stra­ti­ve pro­vi­si­ons which are more favoura­ble to workers in terms of pro­tec­ting their rights in respect of the use of AI systems by employers, or from encou­ra­ging or allo­wing the appli­ca­ti­on of coll­ec­ti­ve agree­ments which are more favoura­ble to workers.

12. This Regu­la­ti­on does not app­ly to AI systems released under free and open-source licen­ces, unless they are pla­ced on the mar­ket or put into ser­vice as high-risk AI systems or as an AI system that falls under Artic­le 5 or 50. 

(103) Free and open-source AI components cover the software and data, including models and general-purpose AI models, tools, services or processes of an AI system. Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purposes of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between microenterprises, should not benefit from the exceptions provided to free and open-source AI components. The fact of making AI components available through open repositories should not, in itself, constitute a monetisation.

(104) The providers of general-purpose AI models that are released under a free and open-source licence, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available should be subject to exceptions as regards the transparency-related requirements imposed on general-purpose AI models, unless they can be considered to present a systemic risk, in which case the circumstance that the model is transparent and accompanied by an open-source licence should not be considered to be a sufficient reason to exclude compliance with the obligations under this Regulation. In any case, given that the release of general-purpose AI models under free and open-source licence does not necessarily reveal substantial information on the data set used for the training or fine-tuning of the model and on how compliance with copyright law was thereby ensured, the exception provided for general-purpose AI models from compliance with the transparency-related requirements should not concern the obligation to produce a summary about the content used for model training and the obligation to put in place a policy to comply with Union copyright law, in particular to identify and comply with the reservation of rights pursuant to Article 4(3) of Directive (EU) 2019/790 of the European Parliament and of the Council.

(118) This Regu­la­ti­on regu­la­tes AI systems and AI models by impo­sing cer­tain requi­re­ments and obli­ga­ti­ons for rele­vant mar­ket actors that are pla­cing them on the mar­ket, put­ting into ser­vice or use in the Uni­on, ther­eby com­ple­men­ting obli­ga­ti­ons for pro­vi­ders of inter­me­dia­ry ser­vices that embed such systems or models into their ser­vices regu­la­ted by Regu­la­ti­on (EU) 2022/2065. To the ext­ent that such systems or models are embedded into desi­gna­ted very lar­ge online plat­forms or very lar­ge online search engi­nes, they are sub­ject to the risk-manage­ment frame­work pro­vi­ded for in Regu­la­ti­on (EU) 2022/2065. Con­se­quent­ly, the cor­re­spon­ding obli­ga­ti­ons of this Regu­la­ti­on should be pre­su­med to be ful­fil­led, unless signi­fi­cant syste­mic risks not cover­ed by Regu­la­ti­on (EU) 2022/2065 emer­ge and are iden­ti­fi­ed in such models. Within this frame­work, pro­vi­ders of very lar­ge online plat­forms and very lar­ge online search engi­nes are obli­ged to assess poten­ti­al syste­mic risks stem­ming from the design, func­tio­ning and use of their ser­vices, inclu­ding how the design of algo­rith­mic systems used in the ser­vice may con­tri­bu­te to such risks, as well as syste­mic risks stem­ming from poten­ti­al misu­s­es. Tho­se pro­vi­ders are also obli­ged to take appro­pria­te miti­ga­ting mea­su­res in obser­van­ce of fun­da­men­tal rights. 

(119) Con­side­ring the quick pace of inno­va­ti­on and the tech­no­lo­gi­cal evo­lu­ti­on of digi­tal ser­vices in scope of dif­fe­rent instru­ments of Uni­on law in par­ti­cu­lar having in mind the usa­ge and the per­cep­ti­on of their reci­pi­en­ts, the AI systems sub­ject to this Regu­la­ti­on may be pro­vi­ded as inter­me­dia­ry ser­vices or parts the­reof within the mea­ning of Regu­la­ti­on (EU) 2022/2065, which should be inter­pre­ted in a tech­no­lo­gy-neu­tral man­ner. For exam­p­le, AI systems may be used to pro­vi­de online search engi­nes, in par­ti­cu­lar, to the ext­ent that an AI system such as an online chat­bot per­forms sear­ches of, in prin­ci­ple, all web­sites, then incor­po­ra­tes the results into its exi­sting know­ledge and uses the updated know­ledge to gene­ra­te a sin­gle out­put that com­bi­nes dif­fe­rent sources of information.

(120) Fur­ther­mo­re, obli­ga­ti­ons pla­ced on pro­vi­ders and deployers of cer­tain AI systems in this Regu­la­ti­on to enable the detec­tion and dis­clo­sure that the out­puts of tho­se systems are arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted are par­ti­cu­lar­ly rele­vant to faci­li­ta­te the effec­ti­ve imple­men­ta­ti­on of Regu­la­ti­on (EU) 2022/2065. This applies in par­ti­cu­lar as regards the obli­ga­ti­ons of pro­vi­ders of very lar­ge online plat­forms or very lar­ge online search engi­nes to iden­ti­fy and miti­ga­te syste­mic risks that may ari­se from the dis­se­mi­na­ti­on of con­tent that has been arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted, in par­ti­cu­lar risk of the actu­al or fore­seeable nega­ti­ve effects on demo­cra­tic pro­ce­s­ses, civic dis­cour­se and elec­to­ral pro­ce­s­ses, inclu­ding through disinformation. 

(121) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the single market. Compliance with harmonised standards as defined in Article 2, point (1)(c), of Regulation (EU) No 1025/2012 of the European Parliament and of the Council, which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. A balanced representation of interests involving all relevant stakeholders in the development of standards, in particular SMEs, consumer organisations and environmental and social stakeholders in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012 should therefore be encouraged. In order to facilitate compliance, the standardisation requests should be issued by the Commission without undue delay. When preparing the standardisation request, the Commission should consult the advisory forum and the Board in order to collect relevant expertise. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, and after consultation of the advisory forum, common specifications for certain requirements under this Regulation.

The com­mon spe­ci­fi­ca­ti­on should be an excep­tio­nal fall back solu­ti­on to faci­li­ta­te the provider’s obli­ga­ti­on to com­ply with the requi­re­ments of this Regu­la­ti­on, when the stan­dar­di­sati­on request has not been accept­ed by any of the Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons, or when the rele­vant har­mo­ni­s­ed stan­dards insuf­fi­ci­ent­ly address fun­da­men­tal rights con­cerns, or when the har­mo­ni­s­ed stan­dards do not com­ply with the request, or when the­re are delays in the adop­ti­on of an appro­pria­te har­mo­ni­s­ed stan­dard. Whe­re such a delay in the adop­ti­on of a har­mo­ni­s­ed stan­dard is due to the tech­ni­cal com­ple­xi­ty of that stan­dard, this should be con­side­red by the Com­mis­si­on befo­re con­tem­pla­ting the estab­lish­ment of com­mon spe­ci­fi­ca­ti­ons. When deve­lo­ping com­mon spe­ci­fi­ca­ti­ons, the Com­mis­si­on is encou­ra­ged to coope­ra­te with inter­na­tio­nal part­ners and inter­na­tio­nal stan­dar­di­sati­on bodies. 

Artic­le 3 Definitions

For the pur­po­ses of this Regu­la­ti­on, the fol­lo­wing defi­ni­ti­ons apply:

(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

(12) The noti­on of ‘AI system’ in this Regu­la­ti­on should be cle­ar­ly defi­ned and should be clo­se­ly ali­gned with the work of inter­na­tio­nal orga­ni­sa­ti­ons working on AI to ensu­re legal cer­tain­ty, faci­li­ta­te inter­na­tio­nal con­ver­gence and wide accep­tance, while pro­vi­ding the fle­xi­bi­li­ty to accom­mo­da­te the rapid tech­no­lo­gi­cal deve­lo­p­ments in this field. Moreo­ver, the defi­ni­ti­on should be based on key cha­rac­te­ri­stics of AI systems that distin­gu­ish it from simp­ler tra­di­tio­nal soft­ware systems or pro­gramming approa­ches and should not cover systems that are based on the rules defi­ned sole­ly by natu­ral per­sons to auto­ma­ti­cal­ly exe­cu­te ope­ra­ti­ons. A key cha­rac­te­ri­stic of AI systems is their capa­bi­li­ty to infer. This capa­bi­li­ty to infer refers to the pro­cess of obtai­ning the out­puts, such as pre­dic­tions, con­tent, recom­men­da­ti­ons, or decis­i­ons, which can influence phy­si­cal and vir­tu­al envi­ron­ments, and to a capa­bi­li­ty of AI systems to deri­ve models or algo­rith­ms, or both, from inputs or data. The tech­ni­ques that enable infe­rence while buil­ding an AI system include machi­ne lear­ning approa­ches that learn from data how to achie­ve cer­tain objec­ti­ves, and logic- and know­ledge-based approa­ches that infer from encoded know­ledge or sym­bo­lic repre­sen­ta­ti­on of the task to be sol­ved. The capa­ci­ty of an AI system to infer tran­s­cends basic data pro­ce­s­sing by enab­ling lear­ning, rea­so­ning or model­ling. The term ‘machi­ne-based’ refers to the fact that AI systems run on machines.

The reference to explicit or implicit objectives underscores that AI systems can operate according to explicitly defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. For the purposes of this Regulation, environments should be understood to be the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems and include predictions, content, recommendations or decisions. AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).

(2) ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;

(3) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

(4) ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;

(13) The noti­on of ‘deployer’ refer­red to in this Regu­la­ti­on should be inter­pre­ted as any natu­ral or legal per­son, inclu­ding a public aut­ho­ri­ty, agen­cy or other body, using an AI system under its aut­ho­ri­ty, except whe­re the AI system is used in the cour­se of a per­so­nal non-pro­fes­sio­nal acti­vi­ty. Depen­ding on the type of AI system, the use of the system may affect per­sons other than the deployer.

(5) ‘authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;

(6) ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;

(7) ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;

(8) ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor;

(9) ‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market;

(10) ‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;

(11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose;

(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;

(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;

(14) ‘safety component’ means a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property;

(15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use;

(16) ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers;

(17) ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the supply chain being made available on the market;

(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;

(19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;

(20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled;

(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;

(22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation;

(23) ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed;

(24) ‘CE marking’ means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing;

(25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;

(26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;

(27) ‘harmonised standard’ means a harmonised standard as defined in Article 2, point (1)(c), of Regulation (EU) No 1025/2012;

(28) ‘common specification’ means a set of technical specifications as defined in Article 2, point (4), of Regulation (EU) No 1025/2012, providing means to comply with certain requirements established under this Regulation;

(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters;

(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting;

(31) ‘validation data set’ means a separate data set or part of the training data set, either as a fixed or variable split;

(32) ‘testing data’ means data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;

(33) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;

(34) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;

(14) The noti­on of ‘bio­me­tric data’ used in this Regu­la­ti­on should be inter­pre­ted in light of the noti­on of bio­me­tric data as defi­ned in Artic­le 4, point (14) of Regu­la­ti­on (EU) 2016/679, Artic­le 3, point (18) of Regu­la­ti­on (EU) 2018/1725 and Artic­le 3, point (13) of Direc­ti­ve (EU) 2016/680. Bio­me­tric data can allow for the authen­ti­ca­ti­on, iden­ti­fi­ca­ti­on or cate­go­ri­sa­ti­on of natu­ral per­sons and for the reco­gni­ti­on of emo­ti­ons of natu­ral persons.

(35) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database;

(15) The noti­on of ‘bio­me­tric iden­ti­fi­ca­ti­on’ refer­red to in this Regu­la­ti­on should be defi­ned as the auto­ma­ted reco­gni­ti­on of phy­si­cal, phy­sio­lo­gi­cal and beha­viou­ral human fea­tures such as the face, eye move­ment, body shape, voice, pro­so­dy, gait, postu­re, heart rate, blood pres­su­re, odour, keystrokes cha­rac­te­ri­stics, for the pur­po­se of estab­li­shing an individual’s iden­ti­ty by com­pa­ring bio­me­tric data of that indi­vi­du­al to stored bio­me­tric data of indi­vi­du­als in a refe­rence data­ba­se, irre­spec­ti­ve of whe­ther the indi­vi­du­al has given its con­sent or not. This exclu­des AI systems inten­ded to be used for bio­me­tric veri­fi­ca­ti­on, which inclu­des authen­ti­ca­ti­on, who­se sole pur­po­se is to con­firm that a spe­ci­fic natu­ral per­son is the per­son he or she claims to be and to con­firm the iden­ti­ty of a natu­ral per­son for the sole pur­po­se of having access to a ser­vice, unlocking a device or having secu­ri­ty access to premises.

(36) ‘biometric verification’ means the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data;

(16) The notion of ‘biometric categorisation’ referred to in this Regulation should be defined as assigning natural persons to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation. This does not include biometric categorisation systems that are a purely ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective technical reasons, be used without the principal service, and the integration of that feature or functionality is not a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body features used on online marketplaces could constitute such an ancillary feature as they can be used only in relation to the principal service which consists in selling a product by allowing the consumer to preview the display of the product on him or herself and help the consumer to make a purchase decision. Filters used on online social network services which categorise facial or body features to allow users to add or modify pictures or videos could also be considered to be an ancillary feature as such a filter cannot be used without the principal service of the social network services consisting in the sharing of content online.

(37) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725;

(38) ‘sensitive operational data’ means operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings;

(39) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;

(18) The noti­on of ‘emo­ti­on reco­gni­ti­on system’ refer­red to in this Regu­la­ti­on should be defi­ned as an AI system for the pur­po­se of iden­ti­fy­ing or infer­ring emo­ti­ons or inten­ti­ons of natu­ral per­sons on the basis of their bio­me­tric data. The noti­on refers to emo­ti­ons or inten­ti­ons such as hap­pi­ness, sad­ness, anger, sur­pri­se, dis­gust, embar­rass­ment, exci­te­ment, shame, con­tempt, satis­fac­tion and amu­se­ment. It does not include phy­si­cal sta­tes, such as pain or fati­gue, inclu­ding, for exam­p­le, systems used in detec­ting the sta­te of fati­gue of pro­fes­sio­nal pilots or dri­vers for the pur­po­se of pre­ven­ting acci­dents. This does also not include the mere detec­tion of rea­di­ly appa­rent expres­si­ons, ges­tu­res or move­ments, unless they are used for iden­ti­fy­ing or infer­ring emo­ti­ons. Tho­se expres­si­ons can be basic facial expres­si­ons, such as a frown or a smi­le, or ges­tu­res such as the move­ment of hands, arms or head, or cha­rac­te­ri­stics of a person’s voice, such as a rai­sed voice or whispering. 

(40) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons;

(41) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database;

(17) The noti­on of ‘remo­te bio­me­tric iden­ti­fi­ca­ti­on system’ refer­red to in this Regu­la­ti­on should be defi­ned func­tion­al­ly, as an AI system inten­ded for the iden­ti­fi­ca­ti­on of natu­ral per­sons wit­hout their acti­ve invol­vement, typi­cal­ly at a distance, through the com­pa­ri­son of a person’s bio­me­tric data with the bio­me­tric data con­tai­ned in a refe­rence data­ba­se, irre­spec­tively of the par­ti­cu­lar tech­no­lo­gy, pro­ce­s­ses or types of bio­me­tric data used. Such remo­te bio­me­tric iden­ti­fi­ca­ti­on systems are typi­cal­ly used to per­cei­ve mul­ti­ple per­sons or their beha­viour simul­ta­neous­ly in order to faci­li­ta­te signi­fi­cant­ly the iden­ti­fi­ca­ti­on of natu­ral per­sons wit­hout their acti­ve invol­vement. This exclu­des AI systems inten­ded to be used for bio­me­tric veri­fi­ca­ti­on, which inclu­des authen­ti­ca­ti­on, the sole pur­po­se of which is to con­firm that a spe­ci­fic natu­ral per­son is the per­son he or she claims to be and to con­firm the iden­ti­ty of a natu­ral per­son for the sole pur­po­se of having access to a ser­vice, unlocking a device or having secu­ri­ty access to premises.

That exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to the remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons without their active involvement. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data has already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.

(42) ‘real-time remote biometric identification system’ means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention;

(43) ‘post remote biometric identification system’ means a remote biometric identification system other than a real-time remote biometric identification system;

(44) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions;

(19) For the purposes of this Regulation the notion of ‘publicly accessible space’ should be understood as referring to any physical space that is accessible to an undetermined number of natural persons, and irrespective of whether the space in question is privately or publicly owned, irrespective of the activity for which the space may be used, such as for commerce, for example, shops, restaurants, cafés; for services, for example, banks, professional activities, hospitality; for sport, for example, swimming pools, gyms, stadiums; for transport, for example, bus, metro and railway stations, airports, means of transport; for entertainment, for example, cinemas, theatres, museums, concert and conference halls; or for leisure or otherwise, for example, public roads and squares, parks, forests, playgrounds. A space should also be classified as being publicly accessible if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions which can be fulfilled by an undetermined number of persons, such as the purchase of a ticket or title of transport, prior registration or having a certain age. In contrast, a space should not be considered to be publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority over the space. The factual possibility of access alone, such as an unlocked door or an open gate in a fence, does not imply that the space is publicly accessible in the presence of indications or circumstances suggesting the contrary, such as signs prohibiting or restricting access.
Com­pa­ny and fac­to­ry pre­mi­ses, as well as offices and work­places that are inten­ded to be acce­s­sed only by rele­vant employees and ser­vice pro­vi­ders, are spaces that are not publicly acce­s­si­ble. Publicly acce­s­si­ble spaces should not include pri­sons or bor­der con­trol. Some other spaces may com­pri­se both publicly acce­s­si­ble and non-publicly acce­s­si­ble spaces, such as the hall­way of a pri­va­te resi­den­ti­al buil­ding neces­sa­ry to access a doctor’s office or an air­port. Online spaces are not cover­ed, as they are not phy­si­cal spaces. Whe­ther a given space is acce­s­si­ble to the public should howe­ver be deter­mi­ned on a case-by-case basis, having regard to the spe­ci­fi­ci­ties of the indi­vi­du­al situa­ti­on at hand.

(45) 'law enforcement authority' means:

(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

(46) 'law enforcement' means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security;

(47) 'AI Office' means the Commission's function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission;

(48) 'national competent authority' means a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor;

(49) 'serious incident' means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a) the death of a person, or serious harm to a person's health;

(b) a serious and irreversible disruption of the management or operation of critical infrastructure;

(c) the infringement of obligations under Union law intended to protect fundamental rights;

(d) serious harm to property or the environment;

(50) 'personal data' means personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;

(51) 'non-personal data' means data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;

(52) 'profiling' means profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679;

(53) 'real-world testing plan' means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions;

(54) 'sandbox plan' means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox;

(55) 'AI regulatory sandbox' means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision;

(56) 'AI literacy' means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause;

(20) In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system's development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system's output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement.

Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board (the 'Board') should support the Commission to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI.

(57) 'testing in real-world conditions' means the temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled;

(58) 'subject', for the purpose of real-world testing, means a natural person who participates in testing in real-world conditions;

(59) 'informed consent' means a subject's freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate;

(60) 'deep fake' means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful;

(61) 'widespread infringement' means any act or omission contrary to Union law protecting the interest of individuals, which:

(a) has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other than the Member State in which:

(i) the act or omission originated or took place;

(ii) the provider concerned, or, where applicable, its authorised representative is located or established; or

(iii) the deployer is established, when the infringement is committed by the deployer;

(b) has caused, causes or is likely to cause harm to the collective interests of individuals and has common features, including the same unlawful practice or the same interest being infringed, and is occurring concurrently, committed by the same operator, in at least three Member States;

(62) 'critical infrastructure' means critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557;

(63) 'general-purpose AI model' means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;

(97) The notion of general-purpose AI models should be clearly defined and set apart from the notion of AI systems to enable legal certainty. The definition should be based on the key functional characteristics of a general-purpose AI model, in particular the generality and the capability to competently perform a wide range of distinct tasks. These models are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. General-purpose AI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as physical copy. These models may be further modified or fine-tuned into new models. Although AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as for example a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems. This Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that pose systemic risks, which should apply also when these models are integrated or form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market.

When the pro­vi­der of a gene­ral-pur­po­se AI model inte­gra­tes an own model into its own AI system that is made available on the mar­ket or put into ser­vice, that model should be con­side­red to be pla­ced on the mar­ket and, the­r­e­fo­re, the obli­ga­ti­ons in this Regu­la­ti­on for models should con­ti­n­ue to app­ly in addi­ti­on to tho­se for AI systems. The obli­ga­ti­ons laid down for models should in any case not app­ly when an own model is used for purely inter­nal pro­ce­s­ses that are not essen­ti­al for pro­vi­ding a pro­duct or a ser­vice to third par­ties and the rights of natu­ral per­sons are not affec­ted. Con­side­ring their poten­ti­al signi­fi­cant­ly nega­ti­ve effects, the gene­ral-pur­po­se AI models with syste­mic risk should always be sub­ject to the rele­vant obli­ga­ti­ons under this Regu­la­ti­on. The defi­ni­ti­on should not cover AI models used befo­re their pla­cing on the mar­ket for the sole pur­po­se of rese­arch, deve­lo­p­ment and pro­to­ty­p­ing acti­vi­ties. This is wit­hout pre­ju­di­ce to the obli­ga­ti­on to com­ply with this Regu­la­ti­on when, fol­lo­wing such acti­vi­ties, a model is pla­ced on the market.

(99) Lar­ge gene­ra­ti­ve AI models are a typi­cal exam­p­le for a gene­ral-pur­po­se AI model, given that they allow for fle­xi­ble gene­ra­ti­on of con­tent, such as in the form of text, audio, images or video, that can rea­di­ly accom­mo­da­te a wide ran­ge of distinc­ti­ve tasks.

(64) 'high-impact capabilities' means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models;

(98) Whereas the generality of a model could, inter alia, also be determined by a number of parameters, models with at least a billion of parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks.

(65) 'systemic risk' means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain;

(66) 'general-purpose AI system' means an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;

(100) When a general-purpose AI model is integrated into or forms part of an AI system, this system should be considered to be a general-purpose AI system when, due to this integration, this system has the capability to serve a variety of purposes. A general-purpose AI system can be used directly, or it may be integrated into other AI systems.

(67) 'floating-point operation' means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base;

(68) 'downstream provider' means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.

Artic­le 4 AI literacy

Pro­vi­ders and deployers of AI systems shall take mea­su­res to ensu­re, to their best ext­ent, a suf­fi­ci­ent level of AI liter­a­cy of their staff and other per­sons deal­ing with the ope­ra­ti­on and use of AI systems on their behalf, taking into account their tech­ni­cal know­ledge, expe­ri­ence, edu­ca­ti­on and trai­ning and the con­text the AI systems are to be used in, and con­side­ring the per­sons or groups of per­sons on whom the AI systems are to be used.

Chap­ter II Pro­hi­bi­ted AI practices

Artic­le 5 Pro­hi­bi­ted AI Practices

(26) In order to intro­du­ce a pro­por­tio­na­te and effec­ti­ve set of bin­ding rules for AI systems, a cle­ar­ly defi­ned risk-based approach should be fol­lo­wed. That approach should tail­or the type and con­tent of such rules to the inten­si­ty and scope of the risks that AI systems can gene­ra­te. It is the­r­e­fo­re neces­sa­ry to pro­hi­bit cer­tain unac­cep­ta­ble AI prac­ti­ces, to lay down requi­re­ments for high-risk AI systems and obli­ga­ti­ons for the rele­vant ope­ra­tors, and to lay down trans­pa­ren­cy obli­ga­ti­ons for cer­tain AI systems.

(29) AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, the putting into service or the use of certain AI systems with the objective to or the effect of materially distorting human behaviour, whereby significant harms, in particular having sufficiently important adverse impacts on physical, psychological health or financial interests are likely to occur, are particularly dangerous and should therefore be prohibited. Such AI systems deploy subliminal components such as audio, image, video stimuli that persons cannot perceive, as those stimuli are beyond human perception, or other manipulative or deceptive techniques that subvert or impair a person's autonomy, decision-making or free choice in ways that people are not consciously aware of those techniques or, where they are aware of them, can still be deceived or are not able to control or resist them. This could be facilitated, for example, by machine-brain interfaces or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they may materially distort their behaviour in a significantly harmful manner. In addition, AI systems may also otherwise exploit the vulnerabilities of a person or a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882 of the European Parliament and of the Council, or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme poverty, ethnic or religious minorities.

Such AI systems can be pla­ced on the mar­ket, put into ser­vice or used with the objec­ti­ve to or the effect of mate­ri­al­ly dis­tort­ing the beha­viour of a per­son and in a man­ner that cau­ses or is rea­son­ab­ly likely to cau­se signi­fi­cant harm to that or ano­ther per­son or groups of per­sons, inclu­ding harms that may be accu­mu­la­ted over time and should the­r­e­fo­re be pro­hi­bi­ted. It may not be pos­si­ble to assu­me that the­re is an inten­ti­on to distort beha­viour whe­re the dis­tor­ti­on results from fac­tors exter­nal to the AI system which are out­side the con­trol of the pro­vi­der or the deployer, name­ly fac­tors that may not be rea­son­ab­ly fore­seeable and the­r­e­fo­re not pos­si­ble for the pro­vi­der or the deployer of the AI system to miti­ga­te. In any case, it is not neces­sa­ry for the pro­vi­der or the deployer to have the inten­ti­on to cau­se signi­fi­cant harm, pro­vi­ded that such harm results from the mani­pu­la­ti­ve or explo­ita­ti­ve AI-enab­led prac­ti­ces. The pro­hi­bi­ti­ons for such AI prac­ti­ces are com­ple­men­ta­ry to the pro­vi­si­ons con­tai­ned in Direc­ti­ve 2005/29/EC of the Euro­pean Par­lia­ment and of the Coun­cil , in par­ti­cu­lar unfair com­mer­cial prac­ti­ces lea­ding to eco­no­mic or finan­cial harms to con­su­mers are pro­hi­bi­ted under all cir­cum­stances, irre­spec­ti­ve of whe­ther they are put in place through AI systems or other­wi­se. The pro­hi­bi­ti­ons of mani­pu­la­ti­ve and explo­ita­ti­ve prac­ti­ces in this Regu­la­ti­on should not affect lawful prac­ti­ces in the con­text of medi­cal tre­at­ment such as psy­cho­lo­gi­cal tre­at­ment of a men­tal dise­a­se or phy­si­cal reha­bi­li­ta­ti­on, when tho­se prac­ti­ces are car­ri­ed out in accordance with the appli­ca­ble law and medi­cal stan­dards, for exam­p­le expli­cit con­sent of the indi­vi­du­als or their legal repre­sen­ta­ti­ves. 
In addi­ti­on, com­mon and legi­ti­ma­te com­mer­cial prac­ti­ces, for exam­p­le in the field of adver­ti­sing, that com­ply with the appli­ca­ble law should not, in them­sel­ves, be regard­ed as con­sti­tu­ting harmful mani­pu­la­ti­ve AI-enab­led practices.

1. The fol­lo­wing AI prac­ti­ces shall be prohibited:

(a) the pla­cing on the mar­ket, the put­ting into ser­vice or the use of an AI system that deploys sub­li­mi­nal tech­ni­ques bey­ond a person’s con­scious­ness or pur­po­seful­ly mani­pu­la­ti­ve or decep­ti­ve tech­ni­ques, with the objec­ti­ve, or the effect of mate­ri­al­ly dis­tort­ing the beha­viour of a per­son or a group of per­sons by app­re­cia­bly impai­ring their abili­ty to make an infor­med decis­i­on, ther­eby caus­ing them to take a decis­i­on that they would not have other­wi­se taken in a man­ner that cau­ses or is rea­son­ab­ly likely to cau­se that per­son, ano­ther per­son or group of per­sons signi­fi­cant harm; 

(28) Asi­de from the many bene­fi­ci­al uses of AI, it can also be misu­s­ed and pro­vi­de novel and powerful tools for mani­pu­la­ti­ve, explo­ita­ti­ve and social con­trol prac­ti­ces. Such prac­ti­ces are par­ti­cu­lar­ly harmful and abu­si­ve and should be pro­hi­bi­ted becau­se they con­tra­dict Uni­on values of respect for human dignity, free­dom, equa­li­ty, demo­cra­cy and the rule of law and fun­da­men­tal rights enshri­ned in the Char­ter, inclu­ding the right to non-dis­cri­mi­na­ti­on, to data pro­tec­tion and to pri­va­cy and the rights of the child.

(b) the pla­cing on the mar­ket, the put­ting into ser­vice or the use of an AI system that exploits any of the vul­nerabi­li­ties of a natu­ral per­son or a spe­ci­fic group of per­sons due to their age, disa­bi­li­ty or a spe­ci­fic social or eco­no­mic situa­ti­on, with the objec­ti­ve, or the effect, of mate­ri­al­ly dis­tort­ing the beha­viour of that per­son or a per­son belon­ging to that group in a man­ner that cau­ses or is rea­son­ab­ly likely to cau­se that per­son or ano­ther per­son signi­fi­cant harm;

(c) the pla­cing on the mar­ket, the put­ting into ser­vice or the use of AI systems for the eva­lua­ti­on or clas­si­fi­ca­ti­on of natu­ral per­sons or groups of per­sons over a cer­tain peri­od of time based on their social beha­viour or known, infer­red or pre­dic­ted per­so­nal or per­so­na­li­ty cha­rac­te­ri­stics, with the social score lea­ding to eit­her or both of the following:

(i) detri­men­tal or unfa­voura­ble tre­at­ment of cer­tain natu­ral per­sons or groups of per­sons in social con­texts that are unre­la­ted to the con­texts in which the data was ori­gi­nal­ly gene­ra­ted or collected;

(ii) detri­men­tal or unfa­voura­ble tre­at­ment of cer­tain natu­ral per­sons or groups of per­sons that is unju­sti­fi­ed or dis­pro­por­tio­na­te to their social beha­viour or its gravity;

(31) AI systems pro­vi­ding social scoring of natu­ral per­sons by public or pri­va­te actors may lead to dis­cri­mi­na­to­ry out­co­mes and the exclu­si­on of cer­tain groups. They may vio­la­te the right to dignity and non-dis­cri­mi­na­ti­on and the values of equa­li­ty and justi­ce. Such AI systems eva­lua­te or clas­si­fy natu­ral per­sons or groups the­reof on the basis of mul­ti­ple data points rela­ted to their social beha­viour in mul­ti­ple con­texts or known, infer­red or pre­dic­ted per­so­nal or per­so­na­li­ty cha­rac­te­ri­stics over cer­tain peri­ods of time. The social score obtai­ned from such AI systems may lead to the detri­men­tal or unfa­voura­ble tre­at­ment of natu­ral per­sons or who­le groups the­reof in social con­texts, which are unre­la­ted to the con­text in which the data was ori­gi­nal­ly gene­ra­ted or coll­ec­ted or to a detri­men­tal tre­at­ment that is dis­pro­por­tio­na­te or unju­sti­fi­ed to the gra­vi­ty of their social beha­viour. AI systems ent­ail­ing such unac­cep­ta­ble scoring prac­ti­ces and lea­ding to such detri­men­tal or unfa­voura­ble out­co­mes should the­r­e­fo­re be pro­hi­bi­ted. That pro­hi­bi­ti­on should not affect lawful eva­lua­ti­on prac­ti­ces of natu­ral per­sons that are car­ri­ed out for a spe­ci­fic pur­po­se in accordance with Uni­on and natio­nal law.

(d) the pla­cing on the mar­ket, the put­ting into ser­vice for this spe­ci­fic pur­po­se, or the use of an AI system for making risk assess­ments of natu­ral per­sons in order to assess or pre­dict the risk of a natu­ral per­son com­mit­ting a cri­mi­nal offence, based sole­ly on the pro­fil­ing of a natu­ral per­son or on asses­sing their per­so­na­li­ty traits and cha­rac­te­ri­stics; this pro­hi­bi­ti­on shall not app­ly to AI systems used to sup­port the human assess­ment of the invol­vement of a per­son in a cri­mi­nal acti­vi­ty, which is alre­a­dy based on objec­ti­ve and veri­fia­ble facts direct­ly lin­ked to a cri­mi­nal activity;

(42) In line with the pre­sump­ti­on of inno­cence, natu­ral per­sons in the Uni­on should always be jud­ged on their actu­al beha­viour. Natu­ral per­sons should never be jud­ged on AI-pre­dic­ted beha­viour based sole­ly on their pro­fil­ing, per­so­na­li­ty traits or cha­rac­te­ri­stics, such as natio­na­li­ty, place of birth, place of resi­dence, num­ber of child­ren, level of debt or type of car, wit­hout a rea­sonable sus­pi­ci­on of that per­son being invol­ved in a cri­mi­nal acti­vi­ty based on objec­ti­ve veri­fia­ble facts and wit­hout human assess­ment the­reof. The­r­e­fo­re, risk assess­ments car­ri­ed out with regard to natu­ral per­sons in order to assess the likeli­hood of their offen­ding or to pre­dict the occur­rence of an actu­al or poten­ti­al cri­mi­nal offence based sole­ly on pro­fil­ing them or on asses­sing their per­so­na­li­ty traits and cha­rac­te­ri­stics should be pro­hi­bi­ted. In any case, that pro­hi­bi­ti­on does not refer to or touch upon risk ana­ly­tics that are not based on the pro­fil­ing of indi­vi­du­als or on the per­so­na­li­ty traits and cha­rac­te­ri­stics of indi­vi­du­als, such as AI systems using risk ana­ly­tics to assess the likeli­hood of finan­cial fraud by under­ta­kings on the basis of sus­pi­cious tran­sac­tions or risk ana­ly­tic tools to pre­dict the likeli­hood of the loca­li­sa­ti­on of nar­co­tics or illi­cit goods by cus­toms aut­ho­ri­ties, for exam­p­le on the basis of known traf­ficking routes.

(e) the pla­cing on the mar­ket, the put­ting into ser­vice for this spe­ci­fic pur­po­se, or the use of AI systems that crea­te or expand facial reco­gni­ti­on data­ba­ses through the unt­ar­ge­ted scra­ping of facial images from the inter­net or CCTV footage;

(43) The pla­cing on the mar­ket, the put­ting into ser­vice for that spe­ci­fic pur­po­se, or the use of AI systems that crea­te or expand facial reco­gni­ti­on data­ba­ses through the unt­ar­ge­ted scra­ping of facial images from the inter­net or CCTV foota­ge, should be pro­hi­bi­ted becau­se that prac­ti­ce adds to the fee­ling of mass sur­veil­lan­ce and can lead to gross vio­la­ti­ons of fun­da­men­tal rights, inclu­ding the right to privacy.

(f) the pla­cing on the mar­ket, the put­ting into ser­vice for this spe­ci­fic pur­po­se, or the use of AI systems to infer emo­ti­ons of a natu­ral per­son in the are­as of work­place and edu­ca­ti­on insti­tu­ti­ons, except whe­re the use of the AI system is inten­ded to be put in place or into the mar­ket for medi­cal or safe­ty reasons; 

(44) There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly as expressions of emotions vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such systems are the limited reliability, the lack of specificity and the limited generalisability. Therefore, AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the concerned persons. Considering the imbalance of power in the context of work or education, combined with the intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof. Therefore, the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited. That prohibition should not cover AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutical use.

(g) the pla­cing on the mar­ket, the put­ting into ser­vice for this spe­ci­fic pur­po­se, or the use of bio­me­tric cate­go­ri­sa­ti­on systems that cate­go­ri­se indi­vi­du­al­ly natu­ral per­sons based on their bio­me­tric data to dedu­ce or infer their race, poli­ti­cal opi­ni­ons, trade uni­on mem­ber­ship, reli­gious or phi­lo­so­phi­cal beliefs, sex life or sexu­al ori­en­ta­ti­on; this pro­hi­bi­ti­on does not cover any label­ling or fil­te­ring of lawful­ly acqui­red bio­me­tric data­sets, such as images, based on bio­me­tric data or cate­go­ri­zing of bio­me­tric data in the area of law enforcement;

(30) Bio­me­tric cate­go­ri­sa­ti­on systems that are based on natu­ral per­sons’ bio­me­tric data, such as an indi­vi­du­al person’s face or fin­ger­print, to dedu­ce or infer an indi­vi­du­als’ poli­ti­cal opi­ni­ons, trade uni­on mem­ber­ship, reli­gious or phi­lo­so­phi­cal beliefs, race, sex life or sexu­al ori­en­ta­ti­on should be pro­hi­bi­ted. That pro­hi­bi­ti­on should not cover the lawful label­ling, fil­te­ring or cate­go­ri­sa­ti­on of bio­me­tric data sets acqui­red in line with Uni­on or natio­nal law accor­ding to bio­me­tric data, such as the sort­ing of images accor­ding to hair colour or eye colour, which can for exam­p­le be used in the area of law enforcement.

(40) In accordance with Artic­le 6a of Pro­to­col No 21 on the posi­ti­on of the United King­dom and Ire­land in respect of the area of free­dom, secu­ri­ty and justi­ce, as anne­xed to the TEU and to the TFEU, Ire­land is not bound by the rules laid down in Artic­le 5(1), first sub­pa­ra­graph, point (g), to the ext­ent it applies to the use of bio­me­tric cate­go­ri­sa­ti­on systems for acti­vi­ties in the field of poli­ce coope­ra­ti­on and judi­cial coope­ra­ti­on in cri­mi­nal mat­ters, Artic­le 5(1), first sub­pa­ra­graph, point (d), to the ext­ent it applies to the use of AI systems cover­ed by that pro­vi­si­on, Artic­le 5(1), first sub­pa­ra­graph, point (h), Artic­le 5(2) to (6) and Artic­le 26(10) of this Regu­la­ti­on adopted on the basis of Artic­le 16 TFEU which rela­te to the pro­ce­s­sing of per­so­nal data by the Mem­ber Sta­tes when car­ry­ing out acti­vi­ties fal­ling within the scope of Chap­ter 4 or Chap­ter 5 of Tit­le V of Part Three of the TFEU, whe­re Ire­land is not bound by the rules gover­ning the forms of judi­cial coope­ra­ti­on in cri­mi­nal mat­ters or poli­ce coope­ra­ti­on which requi­re com­pli­ance with the pro­vi­si­ons laid down on the basis of Artic­le 16 TFEU.

(41) In accordance with Artic­les 2 and 2a of Pro­to­col No 22 on the posi­ti­on of Den­mark, anne­xed to the TEU and to the TFEU, Den­mark is not bound by rules laid down in Artic­le 5(1), first sub­pa­ra­graph, point (g), to the ext­ent it applies to the use of bio­me­tric cate­go­ri­sa­ti­on systems for acti­vi­ties in the field of poli­ce coope­ra­ti­on and judi­cial coope­ra­ti­on in cri­mi­nal mat­ters, Artic­le 5(1), first sub­pa­ra­graph, point (d), to the ext­ent it applies to the use of AI systems cover­ed by that pro­vi­si­on, Artic­le 5(1), first sub­pa­ra­graph, point (h), (2) to (6) and Artic­le 26(10) of this Regu­la­ti­on adopted on the basis of Artic­le 16 TFEU, or sub­ject to their appli­ca­ti­on, which rela­te to the pro­ce­s­sing of per­so­nal data by the Mem­ber Sta­tes when car­ry­ing out acti­vi­ties fal­ling within the scope of Chap­ter 4 or Chap­ter 5 of Tit­le V of Part Three of the TFEU.

(h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;

(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.

Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of biometric data for purposes other than law enforcement.

(32) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. Such possible biased results and discriminatory effects are particularly relevant with regard to age, ethnicity, race, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in real-time carry heightened risks for the rights and freedoms of the persons concerned in the context of, or impacted by, law enforcement activities.

(33) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for certain victims of crime including missing persons; certain threats to the life or to the physical safety of natural persons or of a terrorist attack; and the localisation or identification of perpetrators or suspects of the criminal offences listed in an annex to this Regulation, where those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years and as they are defined in the law of that Member State. Such a threshold for the custodial sentence or detention order in accordance with national law contributes to ensuring that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems.

Moreover, the list of criminal offences provided in an annex to this Regulation is based on the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, taking into account that some of those offences are, in practice, likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification could, foreseeably, be necessary and proportionate to highly varying degrees for the practical pursuit of the localisation or identification of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. An imminent threat to life or the physical safety of natural persons could also result from a serious disruption of critical infrastructure, as defined in Article 2, point (4) of Directive (EU) 2022/2557 of the European Parliament and of the Council, where the disruption or destruction of such critical infrastructure would result in an imminent threat to life or the physical safety of a person, including through serious harm to the provision of basic supplies to the population or to the exercise of the core function of the State. In addition, this Regulation should preserve the ability for law enforcement, border control, immigration or asylum authorities to carry out identity checks in the presence of the person concerned in accordance with the conditions set out in Union and national law for such checks.
In particular, law enforcement, border control, immigration or asylum authorities should be able to use information systems, in accordance with Union or national law, to identify persons who, during an identity check, either refuse to be identified or are unable to state or prove their identity, without being required by this Regulation to obtain prior authorisation. This could be, for example, a person involved in a crime, being unwilling, or unable due to an accident or a medical condition, to disclose their identity to law enforcement authorities.

(34) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be deployed only to confirm the specifically targeted individual’s identity and should be limited to what is strictly necessary concerning the period of time, as well as the geographic and personal scope, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The use of the real-time remote biometric identification system in publicly accessible spaces should be authorised only if the relevant law enforcement authority has completed a fundamental rights impact assessment and, unless provided otherwise in this Regulation, has registered the system in the database as set out in this Regulation. The reference database of persons should be appropriate for each use case in each of the situations mentioned above.

(94) Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that allows such processing only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and where authorised by Union or Member State law. Such use, when authorised, also needs to respect the principles laid down in Article 4(1) of Directive (EU) 2016/680, including lawfulness, fairness and transparency, purpose limitation, accuracy and storage limitation.

2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements:

(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used;

(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay.

(35) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State whose decision is binding. Such authorisation should, in principle, be obtained prior to the use of the AI system with a view to identifying a person or persons. Exceptions to that rule should be allowed in duly justified situations on grounds of urgency, namely in situations where the need to use the systems concerned is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use of the AI system. In such situations of urgency, the use of the AI system should be restricted to the absolute minimum necessary and should be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations request such authorisation while providing the reasons for not having been able to request it earlier, without undue delay and at the latest within 24 hours. If such an authorisation is rejected, the use of real-time biometric identification systems linked to that authorisation should cease with immediate effect and all the data related to such use should be discarded and deleted. Such data includes input data directly acquired by an AI system in the course of the use of such system as well as the results and outputs of the use linked to that authorisation. It should not include input that is legally acquired in accordance with another Union or national law.
In any case, no decision producing an adverse legal effect on a person should be taken based solely on the output of the remote biometric identification system.

3. For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted.

The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system.

4. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data.

5. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and under the conditions listed in paragraph 1, first subparagraph, point (h), and paragraphs 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, first subparagraph, point (h), including which of the criminal offences referred to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems.

(37) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State concerned has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation. Such national rules should be notified to the Commission within 30 days of their adoption.

(38) The use of AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should be possible only in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for the purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In that context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive (EU) 2016/680. However, the use of real-time remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to that authorisation.

(39) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than law enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725 prohibit the processing of biometric data subject to limited exceptions as provided in those Articles. In the application of Article 9(1) of Regulation (EU) 2016/679, the use of remote biometric identification for purposes other than law enforcement has already been subject to prohibition decisions by national data protection authorities.

6. National market surveillance authorities and the national data protection authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 4 shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance and data protection authorities with a template, including information on the number of the decisions taken by competent judicial authorities or an independent administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and their result.

(36) In order to carry out their tasks in accordance with the requirements set out in this Regulation as well as in national rules, the relevant market surveillance authority and the national data protection authority should be notified of each use of the real-time biometric identification system. Market surveillance authorities and the national data protection authorities that have been notified should submit to the Commission an annual report on the use of real-time biometric identification systems.

7. The Commission shall publish annual reports on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, based on aggregated data in Member States on the basis of the annual reports referred to in paragraph 6. Those annual reports shall not include sensitive operational data of the related law enforcement activities.

8. This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law.

(45) Practices that are prohibited by Union law, including data protection law, non-discrimination law, consumer protection law, and competition law, should not be affected by this Regulation.

Chapter III High-risk AI systems

Section 1 Classification Of AI Systems As High-Risk

Article 6 Classification rules for high-risk AI systems

(46) High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. On the basis of the New Legislative Framework, as clarified in the Commission notice “The ‘Blue Guide’ on the implementation of EU product rules 2022”, the general rule is that more than one legal act of Union harmonisation legislation, such as Regulations (EU) 2017/745 and (EU) 2017/746 of the European Parliament and of the Council or Directive 2006/42/EC of the European Parliament and of the Council, may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. To ensure consistency and avoid unnecessary administrative burdens or costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of this Regulation and of the Union harmonisation legislation listed in an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all applicable requirements of the Union harmonisation legislation in an optimal manner. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation should minimise any potential restriction to international trade.

(166) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, Regulation (EU) 2023/988 of the European Parliament and of the Council would apply as a safety net.

1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(49) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council, Regulation (EU) No 167/2013 of the European Parliament and of the Council, Regulation (EU) No 168/2013 of the European Parliament and of the Council, Directive 2014/90/EU of the European Parliament and of the Council, Directive (EU) 2016/797 of the European Parliament and of the Council, Regulation (EU) 2018/858 of the European Parliament and of the Council, Regulation (EU) 2018/1139 of the European Parliament and of the Council, and Regulation (EU) 2019/2144 of the European Parliament and of the Council, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant delegated or implementing acts on the basis of those acts.

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

(50) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation listed in an annex to this Regulation, it is appropriate to classify them as high-risk under this Regulation if the product concerned undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, automotive and aviation.

(51) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered to be high-risk under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is, in particular, the case for Regulations (EU) 2017/745 and (EU) 2017/746, where a third-party conformity assessment is provided for medium-risk and high-risk products.

(52) As regards stand-alone AI systems, namely high-risk AI systems other than those that are safety components of products, or that are themselves products, it is appropriate to classify them as high-risk if, in light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in this Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems that the Commission should be empowered to adopt, via delegated acts, to take into account the rapid pace of technological development, as well as the potential changes in the use of AI systems.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.

(47) AI systems could have an adverse impact on the health and safety of persons, in particular when such systems operate as safety components of products. Consistent with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate.

(48) The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, the right to non-discrimination, the right to education, consumer protection, workers’ rights, the rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective remedy and to a fair trial, the right of defence and the presumption of innocence, and the right to good administration. In addition to those rights, it is important to highlight the fact that children have specific rights as enshrined in Article 24 of the Charter and in the United Nations Convention on the Rights of the Child, further developed in the UNCRC General Comment No 25 as regards the digital environment, both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.

3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.

The first subparagraph shall apply where any of the following conditions is fulfilled:

(a) the AI system is intended to perform a narrow procedural task;

(b) the AI system is intended to improve the result of a previously completed human activity;

(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.

(53) It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or do not harm those interests substantially. For the purposes of this Regulation, an AI system that does not materially influence the outcome of decision-making should be understood to be an AI system that does not have an impact on the substance, and thereby the outcome, of decision-making, whether human or automated. An AI system that does not materially influence the outcome of decision-making could include situations in which one or more of the following conditions are fulfilled. The first such condition should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a context that is listed as a high-risk use in an annex to this Regulation. The second condition should be that the task performed by the AI system is intended to improve the result of a previously completed human activity that may be relevant for the purposes of the high-risk uses listed in an annex to this Regulation. Considering those characteristics, the AI system provides only an additional layer to a human activity with consequently lowered risk. 
That condition would, for example, apply to AI systems that are intended to improve the language used in previously drafted documents, for example in relation to professional tone, academic style of language or by aligning text to a certain brand messaging.

The third condition should be that the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns. The risk would be lowered because the use of the AI system follows a previously completed human assessment which it is not meant to replace or influence, without proper human review. Such AI systems include for instance those that, given a certain grading pattern of a teacher, can be used to check ex post whether the teacher may have deviated from the grading pattern so as to flag potential inconsistencies or anomalies. The fourth condition should be that the AI system is intended to perform a task that is only preparatory to an assessment relevant for the purposes of the AI systems listed in an annex to this Regulation, thus making the possible impact of the output of the system very low in terms of representing a risk for the assessment to follow. That condition covers, inter alia, smart solutions for file handling, which include various functions from indexing, searching, text and speech processing or linking data to other data sources, or AI systems used for translation of initial documents. In any case, AI systems used in high-risk use-cases listed in an annex to this Regulation should be considered to pose significant risks of harm to the health, safety or fundamental rights if the AI system implies profiling within the meaning of Article 4, point (4) of Regulation (EU) 2016/679 or Article 3, point (4) of Directive (EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725. 
To ensure traceability and transparency, a provider who considers that an AI system is not high-risk on the basis of the conditions referred to above should draw up documentation of the assessment before that system is placed on the market or put into service and should provide that documentation to national competent authorities upon request. Such a provider should be obliged to register the AI system in the EU database established under this Regulation. With a view to providing further guidance for the practical implementation of the conditions under which the AI systems listed in an annex to this Regulation are, on an exceptional basis, non-high-risk, the Commission should, after consulting the Board, provide guidelines specifying that practical implementation, completed by a comprehensive list of practical examples of use cases of AI systems that are high-risk and use cases that are not.

4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than … [18 months from the date of entry into force of this Regulation], provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III, but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.

7. The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by this Regulation.

8. Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance with paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and fundamental rights provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and take account of market and technological developments.

(63) The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant, unless it is specifically otherwise provided for in this Regulation.

Article 7 Amendments to Annex III

1. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled:

(a) the AI systems are intended to be used in any of the areas listed in Annex III;

(b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

2. When assessing the condition under paragraph 1, point (b), the Commission shall take into account the following criteria:

(a) the intended purpose of the AI system;

(b) the extent to which an AI system has been used or is likely to be used;

(c) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed;

(d) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm; 

(e) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate;

(f) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect multiple persons or to disproportionately affect a particular group of persons;

(g) the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome;

(h) the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age; 

(i) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights, shall not be considered to be easily corrigible or reversible;

(j) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety;

(k) the extent to which existing Union law provides for:

(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;

(ii) effective measures to prevent or substantially minimise those risks. 

3. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:

(a) the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;

(b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.

Section 2 Requirements For High-Risk AI Systems

(66) Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights. As no other less trade restrictive measures are reasonably available those requirements are not unjustified restrictions to trade.

Article 8 Compliance with the requirements

1. High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.

2. Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I.

(64) To mitigate the risks from high-risk AI systems placed on the market or put into service and to ensure a high level of trustworthiness, certain mandatory requirements should apply to high-risk AI systems, taking into account the intended purpose and the context of use of the AI system and according to the risk-management system to be established by the provider. The measures adopted by the providers to comply with the mandatory requirements of this Regulation should take into account the generally acknowledged state of the art on AI, be proportionate and effective to meet the objectives of this Regulation. Based on the New Legislative Framework, as clarified in Commission notice “The ‘Blue Guide’ on the implementation of EU product rules 2022”, the general rule is that more than one legal act of Union harmonisation legislation may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. The hazards of AI systems covered by the requirements of this Regulation concern different aspects than the existing Union harmonisation legislation and therefore the requirements of this Regulation would complement the existing body of the Union harmonisation legislation. For example, machinery or medical devices products incorporating an AI system might present risks not addressed by the essential health and safety requirements set out in the relevant Union harmonised legislation, as that sectoral law does not deal with risks specific to AI systems.

This calls for a simultaneous and complementary application of the various legislative acts. To ensure consistency and to avoid an unnecessary administrative burden and unnecessary costs, providers of a product that contains one or more high-risk AI system, to which the requirements of this Regulation and of the Union harmonisation legislation based on the New Legislative Framework and listed in an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all the applicable requirements of that Union harmonised legislation in an optimal manner. That flexibility could mean, for example a decision by the provider to integrate a part of the necessary testing and reporting processes, information and documentation required under this Regulation into already existing documentation and procedures required under existing Union harmonisation legislation based on the New Legislative Framework and listed in an annex to this Regulation. This should not, in any way, undermine the obligation of the provider to comply with all the applicable requirements. 

Article 9 Risk management system

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:

(a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;

(c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;

(d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).

3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.

4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.

5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable.

In identifying the most appropriate risk management measures, the following shall be ensured:

(a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system;

(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;

(c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers. 

With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.

6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.

7. Testing procedures may include testing in real-world conditions in accordance with Article 60.

8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. 

9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.

10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.
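Article 9(8) requires testing against prior defined metrics and probabilistic thresholds appropriate to the intended purpose. The Regulation prescribes no concrete metrics, thresholds or tooling; the sketch below is purely illustrative, and all names and values in it (MetricSpec, evaluate_release, the 0.95/0.02 thresholds) are hypothetical assumptions, not anything the Act specifies.

```python
# Illustrative only: the Regulation does not prescribe metrics, thresholds or
# code. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class MetricSpec:
    name: str                     # metric defined before testing starts
    threshold: float              # probabilistic threshold for the intended purpose
    higher_is_better: bool = True

def evaluate_release(measured: dict, specs: list) -> dict:
    """Check measured test results against the pre-defined thresholds."""
    results = {}
    for spec in specs:
        value = measured[spec.name]
        ok = value >= spec.threshold if spec.higher_is_better else value <= spec.threshold
        results[spec.name] = ok
    return results

# Hypothetical acceptance criteria fixed before the test campaign.
specs = [
    MetricSpec("recall", 0.95),
    MetricSpec("false_positive_rate", 0.02, higher_is_better=False),
]
measured = {"recall": 0.97, "false_positive_rate": 0.015}
print(evaluate_release(measured, specs))  # both checks pass in this example
```

The point of fixing the specs up front is that the pass/fail criteria exist before results are known, matching the "prior defined" requirement rather than being tuned after the fact.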

(65) The risk-management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. That process should be aimed at identifying and mitigating the relevant risks of AI systems on health, safety and fundamental rights. The risk-management system should be regularly reviewed and updated to ensure its continuing effectiveness, as well as justification and documentation of any significant decisions and actions taken subject to this Regulation. This process should ensure that the provider identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable risks of AI systems to the health, safety and fundamental rights in light of their intended purpose and reasonably foreseeable misuse, including the possible risks arising from the interaction between the AI system and the environment within which it operates. The risk-management system should adopt the most appropriate risk-management measures in light of the state of the art in AI. When identifying the most appropriate risk-management measures, the provider should document and explain the choices made and, when relevant, involve experts and external stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the instruction for use may nevertheless be reasonably expected to result from readily predictable human behaviour in the context of the specific characteristics and use of a particular AI system.

Any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights should be included in the instructions for use that are provided by the provider. This is to ensure that the deployer is aware and takes them into account when using the high-risk AI system. Identifying and implementing risk mitigation measures for foreseeable misuse under this Regulation should not require specific additional training for the high-risk AI system by the provider to address foreseeable misuse. The providers however are encouraged to consider such additional training measures to mitigate reasonable foreseeable misuses as necessary and appropriate.

Article 10 Data and data governance

1. High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.

2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular:

(a) the relevant design choices;

(b) data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection;

(c) relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation;

(d) the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent;

(e) an assessment of the availability, quantity and suitability of the data sets that are needed;

(f) examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations;

(g) appropriate measures to detect, prevent and mitigate possible biases identified according to point (f);

(h) the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. 

3. Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof.
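One concrete way to approach the representativeness requirement in Article 10(3) and the bias examination in Article 10(2), points (f) and (g), is to compare each group's share in a data set against the share expected for the persons the system is intended to be used on. The Regulation mandates no particular statistical test; the sketch below is a minimal illustration, and the group labels, expected shares and 10 % tolerance are all hypothetical choices.

```python
# Illustrative only: the Regulation does not mandate any particular statistical
# test. Group names, counts and the 10 % tolerance are hypothetical.
from collections import Counter

def representation_gaps(group_labels: list, expected_shares: dict,
                        tolerance: float = 0.10) -> dict:
    """Compare each group's share in the data set with the expected share for
    the intended population; return groups whose deviation exceeds the
    tolerance, mapped to the (signed) size of the gap."""
    counts = Counter(group_labels)
    total = len(group_labels)
    gaps = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = actual - expected
    return gaps

# A skewed sample: 70 % group "a" where 50/50 is expected -> both groups flagged.
labels = ["a"] * 70 + ["b"] * 30
print(representation_gaps(labels, {"a": 0.5, "b": 0.5}))
```

A check like this belongs early in the data-governance pipeline, so that gaps identified under point (h) can still be addressed before training.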

(68) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as European Digital Innovation Hubs, testing experimentation facilities and researchers, should be able to access and use high-quality data sets within the fields of activities of those actors which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high-quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of AI algorithms on those data sets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.

4. Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used.

5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph 2, points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur:

(a) the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data;

(b) the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation;

(c) the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations;

(d) the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties;

(e) the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first;

(f) the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data.
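Point (b) above names pseudonymisation as one of the privacy-preserving measures for special categories of personal data. The Regulation does not specify how to pseudonymise; one common technique is keyed hashing, sketched below. The function name and key handling are hypothetical and deliberately simplified; in practice the key would live in a key-management system, not in source code.

```python
# Illustrative only: one common pseudonymisation technique (keyed hashing with
# HMAC-SHA-256). The Regulation names pseudonymisation but prescribes no method.
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash. Without the key, the
    pseudonym cannot be linked back to the original identifier."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"store-me-in-a-key-management-system"  # hypothetical key, simplified
pseudonym = pseudonymise("jane.doe@example.com", key)
print(pseudonym)  # 64-character hex string, stable for the same input and key
```

Because the mapping is deterministic under a fixed key, records about the same person stay linkable for bias analysis while the identifier itself is never stored, supporting the access controls required by point (c).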

(69) The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are applicable when personal data are processed. Measures taken by providers to ensure compliance with those principles may include not only anonymisation and encryption, but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves, without prejudice to the requirements on data governance provided for in this Regulation.

(70) In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally, to the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons and following the application of all applicable conditions laid down under this Regulation in addition to the conditions laid down in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, be able to process also special categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2), point (g) of Regulation (EU) 2016/679 and Article 10(2), point (g) of Regulation (EU) 2018/1725.

6. For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 to 5 apply only to the testing data sets.

(67) High-quality data and access to high-quality data plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become a source of discrimination prohibited by Union law. High-quality data sets for training, validation and testing require the implementation of appropriate data governance and management practices. Data sets for training, validation and testing, including the labels, should be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose of the system. In order to facilitate compliance with Union data protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in the case of personal data, transparency about the original purpose of the data collection. The data sets should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the data sets, that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (feedback loops). Biases can for example be inherent in underlying data sets, especially when historical data is being used, or generated when the systems are implemented in real world settings.

Results provided by AI systems could be influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable groups, including racial or ethnic groups. The requirement for the data sets to be to the best extent possible complete and free of errors should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. In particular, data sets should take into account, to the extent required by their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting in which the AI system is intended to be used. The requirements related to data governance can be complied with by having recourse to third parties that offer certified compliance services including verification of data governance, data set integrity, and data training, validation and testing practices, as far as compliance with the data requirements of this Regulation are ensured. 

Article 11 Technical documentation

1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date.

The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment.

2. Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts.

3. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex IV, where necessary, to ensure that, in light of technical progress, the technical documentation provides all the information necessary to assess the compliance of the system with the requirements set out in this Section.

(71) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements under this Regulation, as well as monitoring of their operations and post market monitoring. This requires keeping records and the availability of technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements and facilitate post market monitoring. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk-management system and drawn up in a clear and comprehensive form. The technical documentation should be kept up to date, appropriately throughout the lifetime of the AI system. Furthermore, high-risk AI systems should technically allow for the automatic recording of events, by means of logs, over the duration of the lifetime of the system.

(72) To address concerns related to opacity and complexity of certain AI systems and help deployers to fulfil their obligations under this Regulation, transparency should be required for high-risk AI systems before they are placed on the market or put into service. High-risk AI systems should be designed in a manner to enable deployers to understand how the AI system works, evaluate its functionality, and comprehend its strengths and limitations. High-risk AI systems should be accompanied by appropriate information in the form of instructions of use. Such information should include the characteristics, capabilities and limitations of performance of the AI system. Those would cover information on possible known and foreseeable circumstances related to the use of the high-risk AI system, including deployer action that may influence system behaviour and performance, under which the AI system can lead to risks to health, safety, and fundamental rights, on the changes that have been pre-determined and assessed for conformity by the provider and on the relevant human oversight measures, including the measures to facilitate the interpretation of the outputs of the AI system by the deployers. Transparency, including the accompanying instructions for use, should assist deployers in the use of the system and support informed decision making by them. Deployers should, inter alia, be in a better position to make the correct choice of the system that they intend to use in light of the obligations applicable to them, be educated about the intended and precluded uses, and use the AI system correctly and as appropriate.
In order to enhance legibility and accessibility of the information included in the instructions of use, where appropriate, illustrative examples, for instance on the limitations and on the intended and precluded uses of the AI system, should be included. Providers should ensure that all documentation, including the instructions for use, contains meaningful, comprehensive, accessible and understandable information, taking into account the needs and foreseeable knowledge of the target deployers. Instructions for use should be made available in a language which can be easily understood by target deployers, as determined by the Member State concerned.

Article 12 Record-keeping

1. High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system.

2. In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for:

(a) identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification;

(b) facilitating the post-market monitoring referred to in Article 72; and

(c) monitoring the operation of high-risk AI systems referred to in Article 26(5).

3. For high-risk AI systems referred to in point 1(a) of Annex III, the logging capabilities shall provide, at a minimum:

(a) recording of the period of each use of the system (start date and time and end date and time of each use);

(b) the reference database against which input data has been checked by the system;

(c) the input data for which the search has led to a match;

(d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
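The minimum logging fields of Article 12(3) map naturally onto a structured log record. Purely as an editor's illustration (this sketch, its class and field names are assumptions, not part of the Regulation or any official schema), one possible shape of such a record is:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BiometricUseLogEntry:
    """Hypothetical record covering the minimum fields of Article 12(3)."""
    start: datetime                  # (a) start date and time of the use
    end: datetime                    # (a) end date and time of the use
    reference_database: str          # (b) database the input data was checked against
    matched_inputs: list = field(default_factory=list)     # (c) input data that led to a match
    verifying_persons: list = field(default_factory=list)  # (d) persons verifying results, Article 14(5)

entry = BiometricUseLogEntry(
    start=datetime(2025, 1, 1, 9, 0, tzinfo=timezone.utc),
    end=datetime(2025, 1, 1, 9, 5, tzinfo=timezone.utc),
    reference_database="reference-db-v3",
    matched_inputs=["input-0421"],
    verifying_persons=["verifier-A", "verifier-B"],  # two persons, cf. Article 14(5)
)
assert entry.end > entry.start  # each use is recorded as a closed period
```

Whatever concrete format a provider actually chooses (database rows, append-only log files, etc.), the point is that all four items are captured automatically for every use of the system.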

Article 13 Transparency and provision of information to deployers

1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3.

2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.

3. The instructions for use shall contain at least the following information:

(a) the identity and the contact details of the provider and, where applicable, of its authorised representative;

(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:

(i) its intended purpose;

(ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;

(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2);

(iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;

(v) when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;

(vi) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;

(vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;

(c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;

(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;

(e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;

(f) where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12.
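Because Article 13(3) is in effect a checklist, a provider could verify a draft set of instructions for use against it mechanically. The following sketch is purely illustrative; the key names and the draft content are the editor's assumptions, not terms defined by the Regulation:

```python
# Hypothetical checklist of the minimum content of instructions for use
# under Article 13(3); the key names are illustrative, not official.
REQUIRED_ITEMS = {
    "provider_identity",                # (a)
    "characteristics_and_limitations",  # (b)(i)-(vii)
    "predetermined_changes",            # (c)
    "human_oversight_measures",         # (d)
    "resources_and_maintenance",        # (e)
    "log_collection_mechanisms",        # (f), where relevant
}

def missing_items(draft: dict) -> set:
    """Return the Article 13(3) headings not yet covered by a draft."""
    return REQUIRED_ITEMS - set(draft)

draft = {
    "provider_identity": "Example Provider GmbH, contact address",
    "characteristics_and_limitations": "intended purpose, accuracy metrics, ...",
    "human_oversight_measures": "measures under Article 14",
}
print(sorted(missing_items(draft)))
# ['log_collection_mechanisms', 'predetermined_changes', 'resources_and_maintenance']
```

Such a check only confirms that each heading is addressed at all; whether the content itself is "concise, complete, correct and clear" under paragraph 2 remains a substantive question.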

(74) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in light of their intended purpose and in accordance with the generally acknowledged state of the art. The Commission and relevant organisations and stakeholders are encouraged to take due consideration of the mitigation of risks and the negative impacts of the AI system. The expected level of performance metrics should be declared in the accompanying instructions of use. Providers are urged to communicate that information to deployers in a clear and easily understandable way, free of misunderstandings or misleading statements. Union law on legal metrology, including Directives 2014/31/EU and 2014/32/EU of the European Parliament and of the Council, aims to ensure the accuracy of measurements and to help the transparency and fairness of commercial transactions. In that context, in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking authorities, the Commission should encourage, as appropriate, the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission should take note of and collaborate with international partners working on metrology and relevant measurement indicators relating to AI.

(75) Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or otherwise undesirable behaviour that may result from limitations within the systems or the environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). Therefore, technical and organisational measures should be taken to ensure robustness of high-risk AI systems, for example by designing and developing appropriate technical solutions to prevent or minimise harmful or otherwise undesirable behaviour. Those technical solutions may include for instance mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.

(76) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.

(27) While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG appointed by the Commission. In those guidelines, the AI HLEG developed seven non-binding ethical principles for AI which are intended to help ensure that AI is trustworthy and ethically sound. The seven principles include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being and accountability. Without prejudice to the legally binding requirements of this Regulation and any other applicable Union law, those guidelines contribute to the design of coherent, trustworthy and human-centric AI, in line with the Charter and with the values on which the Union is founded. According to the guidelines of the AI HLEG, human agency and oversight means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans.

Technical robustness and safety means that AI systems are developed and used in a way that allows robustness in the case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and minimise unintended harm. Privacy and data governance means that AI systems are developed and used in accordance with privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.

Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights. Diversity, non-discrimination and fairness means that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. Social and environmental well-being means that AI systems are developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. The application of those principles should be translated, when possible, in the design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this Regulation. All stakeholders, including industry, academia, civil society and standardisation organisations, are encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best practices and standards.

Article 14 Human oversight

1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.

2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.

3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:

(a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.

4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:

(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

(b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;

(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;

(e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.

For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.

(73) High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle. To that end, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. It is also essential, as appropriate, to ensure that high-risk AI systems include mechanisms to guide and inform a natural person to whom human oversight has been assigned to make informed decisions if, when and how to intervene in order to avoid negative consequences or risks, or stop the system if it does not perform as intended. Considering the significant consequences for persons in the case of an incorrect match by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system.
Given the specificities of the areas of law enforcement, migration, border control and asylum, this requirement should not apply where Union or national law considers the application of that requirement to be disproportionate.

Article 15 Accuracy, robustness and cybersecurity

1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.

2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.

3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.

4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard.

The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.

High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.

5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.

The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws.


(77) Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI systems which fall within the scope of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, in accordance with that regulation may demonstrate compliance with the cybersecurity requirements of this Regulation by fulfilling the essential cybersecurity requirements set out in that regulation. When high-risk AI systems fulfil the essential requirements of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof issued under that regulation. To that end, the assessment of the cybersecurity risks, associated to a product with digital elements classified as high-risk AI system according to this Regulation, carried out under a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation.

The conformity assessment procedure provided by this Regulation should apply in relation to the essential cybersecurity requirements of a product with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and classified as a high-risk AI system under this Regulation. However, this rule should not result in reducing the necessary level of assurance for critical products with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements. Therefore, by way of derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as important and critical products with digital elements pursuant to a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and to which the conformity assessment procedure based on internal control set out in an annex to this Regulation applies, are subject to the conformity assessment provisions of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements insofar as the essential cybersecurity requirements of that regulation are concerned. In this case, for all the other aspects covered by this Regulation the respective provisions on conformity assessment based on internal control set out in an annex to this Regulation should apply.
Building on the knowledge and expertise of ENISA on the cybersecurity policy and tasks assigned to ENISA under Regulation (EU) 2019/881 of the European Parliament and of the Council, the Commission should cooperate with ENISA on issues related to cybersecurity of AI systems.

Section 3 Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16 Obligations of providers of high-risk AI systems

(79) It is appropriate that a specific natural or legal person, defined as the provider, takes responsibility for the placing on the market or the putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.

Providers of high-risk AI systems shall:

(a) ensure that their high-risk AI systems are compliant with the requirements set out in Section 2;

(b) indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, the address at which they can be contacted;

(c) have a quality management system in place which complies with Article 17;

(d) keep the documentation referred to in Article 18;

(e) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19;

(f) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service;

(g) draw up an EU declaration of conformity in accordance with Article 47;

(h) affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48;

(i) comply with the registration obligations referred to in Article 49(1);

(j) take the necessary corrective actions and provide information as required in Article 20;

(k) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2;

(l) ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882.

(80) As signa­to­ries to the United Nati­ons Con­ven­ti­on on the Rights of Per­sons with Disa­bi­li­ties, the Uni­on and the Mem­ber Sta­tes are legal­ly obli­ged to pro­tect per­sons with disa­bi­li­ties from dis­cri­mi­na­ti­on and pro­mo­te their equa­li­ty, to ensu­re that per­sons with disa­bi­li­ties have access, on an equal basis with others, to infor­ma­ti­on and com­mu­ni­ca­ti­ons tech­no­lo­gies and systems, and to ensu­re respect for pri­va­cy for per­sons with disa­bi­li­ties. Given the gro­wing importance and use of AI systems, the appli­ca­ti­on of uni­ver­sal design prin­ci­ples to all new tech­no­lo­gies and ser­vices should ensu­re full and equal access for ever­yo­ne poten­ti­al­ly affec­ted by or using AI tech­no­lo­gies, inclu­ding per­sons with disa­bi­li­ties, in a way that takes full account of their inher­ent dignity and diver­si­ty. It is the­r­e­fo­re essen­ti­al that pro­vi­ders ensu­re full com­pli­ance with acce­s­si­bi­li­ty requi­re­ments, inclu­ding Direc­ti­ve (EU) 2016/2102 of the Euro­pean Par­lia­ment and of the Coun­cil and Direc­ti­ve (EU) 2019/882. Pro­vi­ders should ensu­re com­pli­ance with the­se requi­re­ments by design. The­r­e­fo­re, the neces­sa­ry mea­su­res should be inte­gra­ted as much as pos­si­ble into the design of the high-risk AI system.

Artic­le 17 Qua­li­ty manage­ment system

1. Pro­vi­ders of high-risk AI systems shall put a qua­li­ty manage­ment system in place that ensu­res com­pli­ance with this Regu­la­ti­on. That system shall be docu­men­ted in a syste­ma­tic and order­ly man­ner in the form of writ­ten poli­ci­es, pro­ce­du­res and ins­truc­tions, and shall include at least the fol­lo­wing aspects:

(a) a stra­tegy for regu­la­to­ry com­pli­ance, inclu­ding com­pli­ance with con­for­mi­ty assess­ment pro­ce­du­res and pro­ce­du­res for the manage­ment of modi­fi­ca­ti­ons to the high-risk AI system;

(b) tech­ni­ques, pro­ce­du­res and syste­ma­tic actions to be used for the design, design con­trol and design veri­fi­ca­ti­on of the high-risk AI system;

(c) tech­ni­ques, pro­ce­du­res and syste­ma­tic actions to be used for the deve­lo­p­ment, qua­li­ty con­trol and qua­li­ty assu­rance of the high-risk AI system;

(d) exami­na­ti­on, test and vali­da­ti­on pro­ce­du­res to be car­ri­ed out befo­re, during and after the deve­lo­p­ment of the high-risk AI system, and the fre­quen­cy with which they have to be car­ri­ed out;

(e) tech­ni­cal spe­ci­fi­ca­ti­ons, inclu­ding stan­dards, to be applied and, whe­re the rele­vant har­mo­ni­s­ed stan­dards are not applied in full or do not cover all of the rele­vant requi­re­ments set out in Sec­tion 2, the means to be used to ensu­re that the high-risk AI system com­plies with tho­se requirements;

(f) systems and pro­ce­du­res for data manage­ment, inclu­ding data acqui­si­ti­on, data coll­ec­tion, data ana­ly­sis, data label­ling, data sto­rage, data fil­tra­ti­on, data mining, data aggre­ga­ti­on, data reten­ti­on and any other ope­ra­ti­on regar­ding the data that is per­for­med befo­re and for the pur­po­se of the pla­cing on the mar­ket or the put­ting into ser­vice of high-risk AI systems;

(g) the risk manage­ment system refer­red to in Artic­le 9;

(h) the set­ting-up, imple­men­ta­ti­on and main­ten­an­ce of a post-mar­ket moni­to­ring system, in accordance with Artic­le 72;

(i) pro­ce­du­res rela­ted to the report­ing of a serious inci­dent in accordance with Artic­le 73; 

(j) the hand­ling of com­mu­ni­ca­ti­on with natio­nal com­pe­tent aut­ho­ri­ties, other rele­vant aut­ho­ri­ties, inclu­ding tho­se pro­vi­ding or sup­port­ing the access to data, noti­fi­ed bodies, other ope­ra­tors, cus­to­mers or other inte­re­sted parties;

(k) systems and pro­ce­du­res for record-kee­ping of all rele­vant docu­men­ta­ti­on and information;

(l) resour­ce manage­ment, inclu­ding secu­ri­ty-of-sup­p­ly rela­ted measures;

(m) an accoun­ta­bi­li­ty frame­work set­ting out the respon­si­bi­li­ties of the manage­ment and other staff with regard to all the aspects listed in this paragraph.

2. The imple­men­ta­ti­on of the aspects refer­red to in para­graph 1 shall be pro­por­tio­na­te to the size of the provider’s orga­ni­sa­ti­on. Pro­vi­ders shall, in any event, respect the degree of rigour and the level of pro­tec­tion requi­red to ensu­re the com­pli­ance of their high-risk AI systems with this Regulation.

3. Pro­vi­ders of high-risk AI systems that are sub­ject to obli­ga­ti­ons regar­ding qua­li­ty manage­ment systems or an equi­va­lent func­tion under rele­vant sec­to­ral Uni­on law may include the aspects listed in para­graph 1 as part of the qua­li­ty manage­ment systems pur­su­ant to that law.

4. For pro­vi­ders that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices law, the obli­ga­ti­on to put in place a qua­li­ty manage­ment system, with the excep­ti­on of para­graph 1, points (g), (h) and (i) of this Artic­le, shall be dee­med to be ful­fil­led by com­ply­ing with the rules on inter­nal gover­nan­ce arran­ge­ments or pro­ce­s­ses pur­su­ant to the rele­vant Uni­on finan­cial ser­vices law. To that end, any har­mo­ni­s­ed stan­dards refer­red to in Artic­le 40 shall be taken into account.

(81) The pro­vi­der should estab­lish a sound qua­li­ty manage­ment system, ensu­re the accom­plish­ment of the requi­red con­for­mi­ty assess­ment pro­ce­du­re, draw up the rele­vant docu­men­ta­ti­on and estab­lish a robust post-mar­ket moni­to­ring system. Pro­vi­ders of high-risk AI systems that are sub­ject to obli­ga­ti­ons regar­ding qua­li­ty manage­ment systems under rele­vant sec­to­ral Uni­on law should have the pos­si­bi­li­ty to include the ele­ments of the qua­li­ty manage­ment system pro­vi­ded for in this Regu­la­ti­on as part of the exi­sting qua­li­ty manage­ment system pro­vi­ded for in that other sec­to­ral Uni­on law. The com­ple­men­ta­ri­ty bet­ween this Regu­la­ti­on and exi­sting sec­to­ral Uni­on law should also be taken into account in future stan­dar­di­sati­on acti­vi­ties or gui­dance adopted by the Com­mis­si­on. Public aut­ho­ri­ties which put into ser­vice high-risk AI systems for their own use may adopt and imple­ment the rules for the qua­li­ty manage­ment system as part of the qua­li­ty manage­ment system adopted at a natio­nal or regio­nal level, as appro­pria­te, taking into account the spe­ci­fi­ci­ties of the sec­tor and the com­pe­ten­ces and orga­ni­sa­ti­on of the public aut­ho­ri­ty concerned. 

Artic­le 18 Docu­men­ta­ti­on keeping

1. The pro­vi­der shall, for a peri­od ending 10 years after the high-risk AI system has been pla­ced on the mar­ket or put into ser­vice, keep at the dis­po­sal of the natio­nal com­pe­tent authorities:

(a) the tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11;

(b) the docu­men­ta­ti­on con­cer­ning the qua­li­ty manage­ment system refer­red to in Artic­le 17;

(c) the docu­men­ta­ti­on con­cer­ning the chan­ges appro­ved by noti­fi­ed bodies, whe­re applicable;

(d) the decis­i­ons and other docu­ments issued by the noti­fi­ed bodies, whe­re applicable;

(e) the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47.

2. Each Mem­ber Sta­te shall deter­mi­ne con­di­ti­ons under which the docu­men­ta­ti­on refer­red to in para­graph 1 remains at the dis­po­sal of the natio­nal com­pe­tent aut­ho­ri­ties for the peri­od indi­ca­ted in that para­graph for the cases when a pro­vi­der or its aut­ho­ri­sed repre­sen­ta­ti­ve estab­lished on its ter­ri­to­ry goes bank­rupt or cea­ses its acti­vi­ty pri­or to the end of that period.

3. Pro­vi­ders that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices law shall main­tain the tech­ni­cal docu­men­ta­ti­on as part of the docu­men­ta­ti­on kept under the rele­vant Uni­on finan­cial ser­vices law.

Artic­le 19 Auto­ma­ti­cal­ly gene­ra­ted logs

1. Pro­vi­ders of high-risk AI systems shall keep the logs refer­red to in Artic­le 12(1), auto­ma­ti­cal­ly gene­ra­ted by their high-risk AI systems, to the ext­ent such logs are under their con­trol. Wit­hout pre­ju­di­ce to appli­ca­ble Uni­on or natio­nal law, the logs shall be kept for a peri­od appro­pria­te to the inten­ded pur­po­se of the high-risk AI system, of at least six months, unless pro­vi­ded other­wi­se in the appli­ca­ble Uni­on or natio­nal law, in par­ti­cu­lar in Uni­on law on the pro­tec­tion of per­so­nal data.
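The retention rule in paragraph 1 can be read as a simple two-part test: logs must be kept for a period appropriate to the intended purpose, and in any event for at least six months. The following sketch is purely illustrative (the function name, the approximation of six months as 183 days, and the `purpose_retention` parameter are assumptions, not taken from the Regulation), and ignores the longer or shorter periods that other Union or national law may impose.

```python
from datetime import date, timedelta

# Illustrative six-month statutory floor from Article 19(1),
# approximated here as 183 days.
MINIMUM_RETENTION = timedelta(days=183)

def may_delete_logs(generated_on: date, today: date,
                    purpose_retention: timedelta = MINIMUM_RETENTION) -> bool:
    """Hypothetical helper: True only once both the purpose-appropriate
    period and the six-month floor have elapsed."""
    required = max(purpose_retention, MINIMUM_RETENTION)
    return today - generated_on >= required
```

In practice a provider would substitute the purpose-appropriate period determined for the specific high-risk AI system, which may well exceed the floor.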

2. Pro­vi­ders that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices law shall main­tain the logs auto­ma­ti­cal­ly gene­ra­ted by their high-risk AI systems as part of the docu­men­ta­ti­on kept under the rele­vant finan­cial ser­vices law. 

Artic­le 20 Cor­rec­ti­ve actions and duty of information

1. Pro­vi­ders of high-risk AI systems which con­sider or have rea­son to con­sider that a high-risk AI system that they have pla­ced on the mar­ket or put into ser­vice is not in con­for­mi­ty with this Regu­la­ti­on shall imme­dia­te­ly take the neces­sa­ry cor­rec­ti­ve actions to bring that system into con­for­mi­ty, to with­draw it, to disable it, or to recall it, as appro­pria­te. They shall inform the dis­tri­bu­tors of the high-risk AI system con­cer­ned and, whe­re appli­ca­ble, the deployers, the aut­ho­ri­sed repre­sen­ta­ti­ve and importers accordingly.

2. Whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 79(1) and the pro­vi­der beco­mes awa­re of that risk, it shall imme­dia­te­ly inve­sti­ga­te the cau­ses, in col­la­bo­ra­ti­on with the report­ing deployer, whe­re appli­ca­ble, and inform the mar­ket sur­veil­lan­ce aut­ho­ri­ties com­pe­tent for the high-risk AI system con­cer­ned and, whe­re appli­ca­ble, the noti­fi­ed body that issued a cer­ti­fi­ca­te for that high-risk AI system in accordance with Artic­le 44, in par­ti­cu­lar, of the natu­re of the non-com­pli­ance and of any rele­vant cor­rec­ti­ve action taken.

Artic­le 21 Coope­ra­ti­on with com­pe­tent authorities

1. Pro­vi­ders of high-risk AI systems shall, upon a rea­so­ned request by a com­pe­tent aut­ho­ri­ty, pro­vi­de that aut­ho­ri­ty with all the infor­ma­ti­on and docu­men­ta­ti­on neces­sa­ry to demon­stra­te the con­for­mi­ty of the high-risk AI system with the requi­re­ments set out in Sec­tion 2, in a lan­guage which can be easi­ly under­s­tood by the aut­ho­ri­ty in one of the offi­ci­al lan­guages of the insti­tu­ti­ons of the Uni­on as indi­ca­ted by the Mem­ber Sta­te concerned.

2. Upon a rea­so­ned request by a com­pe­tent aut­ho­ri­ty, pro­vi­ders shall also give the reque­st­ing com­pe­tent aut­ho­ri­ty, as appli­ca­ble, access to the auto­ma­ti­cal­ly gene­ra­ted logs of the high-risk AI system refer­red to in Artic­le 12(1), to the ext­ent such logs are under their control.

3. Any infor­ma­ti­on obtai­ned by a com­pe­tent aut­ho­ri­ty pur­su­ant to this Artic­le shall be trea­ted in accordance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 78. 

Artic­le 22 Aut­ho­ri­sed repre­sen­ta­ti­ves of pro­vi­ders of high-risk AI systems

1. Pri­or to making their high-risk AI systems available on the Uni­on mar­ket, pro­vi­ders estab­lished in third count­ries shall, by writ­ten man­da­te, appoint an aut­ho­ri­sed repre­sen­ta­ti­ve which is estab­lished in the Union.

2. The pro­vi­der shall enable its aut­ho­ri­sed repre­sen­ta­ti­ve to per­form the tasks spe­ci­fi­ed in the man­da­te recei­ved from the provider.

3. The aut­ho­ri­sed repre­sen­ta­ti­ve shall per­form the tasks spe­ci­fi­ed in the man­da­te recei­ved from the pro­vi­der. It shall pro­vi­de a copy of the man­da­te to the mar­ket sur­veil­lan­ce aut­ho­ri­ties upon request, in one of the offi­ci­al lan­guages of the insti­tu­ti­ons of the Uni­on, as indi­ca­ted by the com­pe­tent aut­ho­ri­ty. For the pur­po­ses of this Regu­la­ti­on, the man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to car­ry out the fol­lo­wing tasks:

(a) veri­fy that the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47 and the tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11 have been drawn up and that an appro­pria­te con­for­mi­ty assess­ment pro­ce­du­re has been car­ri­ed out by the provider;

(b) keep at the dis­po­sal of the com­pe­tent aut­ho­ri­ties and natio­nal aut­ho­ri­ties or bodies refer­red to in Artic­le 74(10), for a peri­od of 10 years after the high-risk AI system has been pla­ced on the mar­ket or put into ser­vice, the cont­act details of the pro­vi­der that appoin­ted the aut­ho­ri­sed repre­sen­ta­ti­ve, a copy of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47, the tech­ni­cal docu­men­ta­ti­on and, if appli­ca­ble, the cer­ti­fi­ca­te issued by the noti­fi­ed body;

(c) pro­vi­de a com­pe­tent aut­ho­ri­ty, upon a rea­so­ned request, with all the infor­ma­ti­on and docu­men­ta­ti­on, inclu­ding that refer­red to in point (b) of this sub­pa­ra­graph, neces­sa­ry to demon­stra­te the con­for­mi­ty of a high-risk AI system with the requi­re­ments set out in Sec­tion 2, inclu­ding access to the logs, as refer­red to in Artic­le 12(1), auto­ma­ti­cal­ly gene­ra­ted by the high-risk AI system, to the ext­ent such logs are under the con­trol of the provider;

(d) coope­ra­te with com­pe­tent aut­ho­ri­ties, upon a rea­so­ned request, in any action the lat­ter take in rela­ti­on to the high-risk AI system, in par­ti­cu­lar to redu­ce and miti­ga­te the risks posed by the high-risk AI system; 

(e) whe­re appli­ca­ble, com­ply with the regi­stra­ti­on obli­ga­ti­ons refer­red to in Artic­le 49(1), or, if the regi­stra­ti­on is car­ri­ed out by the pro­vi­der its­elf, ensu­re that the infor­ma­ti­on refer­red to in point 3 of Sec­tion A of Annex VIII is correct.

The man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to be addres­sed, in addi­ti­on to or instead of the pro­vi­der, by the com­pe­tent aut­ho­ri­ties, on all issues rela­ted to ensu­ring com­pli­ance with this Regulation.

4. The aut­ho­ri­sed repre­sen­ta­ti­ve shall ter­mi­na­te the man­da­te if it con­siders or has rea­son to con­sider the pro­vi­der to be acting con­tra­ry to its obli­ga­ti­ons pur­su­ant to this Regu­la­ti­on. In such a case, it shall imme­dia­te­ly inform the rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ty, as well as, whe­re appli­ca­ble, the rele­vant noti­fi­ed body, about the ter­mi­na­ti­on of the man­da­te and the rea­sons therefor.

Artic­le 23 Obli­ga­ti­ons of importers

1. Befo­re pla­cing a high-risk AI system on the mar­ket, importers shall ensu­re that the system is in con­for­mi­ty with this Regu­la­ti­on by veri­fy­ing that:

(a) the rele­vant con­for­mi­ty assess­ment pro­ce­du­re refer­red to in Artic­le 43 has been car­ri­ed out by the pro­vi­der of the high-risk AI system;

(b) the pro­vi­der has drawn up the tech­ni­cal docu­men­ta­ti­on in accordance with Artic­le 11 and Annex IV;

(c) the system bears the requi­red CE mar­king and is accom­pa­nied by the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47 and ins­truc­tions for use;

(d) the pro­vi­der has appoin­ted an aut­ho­ri­sed repre­sen­ta­ti­ve in accordance with Artic­le 22(1).
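The four verifications in points (a) to (d) are cumulative: an importer may place the system on the market only once every one of them has been completed. A minimal sketch of that checklist logic follows; the class and field names are invented for illustration and carry no legal weight.

```python
from dataclasses import dataclass

@dataclass
class ImporterChecks:
    """Hypothetical pre-market checklist mirroring Article 23(1)(a)-(d)."""
    conformity_assessment_done: bool   # point (a): Article 43 procedure carried out
    technical_documentation: bool      # point (b): Article 11 / Annex IV documentation
    ce_marking_and_declaration: bool   # point (c): CE marking, Article 47 declaration,
                                       #            instructions for use
    authorised_representative: bool    # point (d): Article 22(1) appointment

    def may_place_on_market(self) -> bool:
        # All four verifications must succeed; any single failure blocks placing
        # the high-risk AI system on the market.
        return all((self.conformity_assessment_done,
                    self.technical_documentation,
                    self.ce_marking_and_declaration,
                    self.authorised_representative))
```

Note that a failed check does not end the matter: under paragraph 2, suspected non-conformity triggers a standstill and, where the system presents an Article 79(1) risk, information duties towards the provider, the authorised representative and the market surveillance authorities.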

2. Whe­re an importer has suf­fi­ci­ent rea­son to con­sider that a high-risk AI system is not in con­for­mi­ty with this Regu­la­ti­on, or is fal­si­fi­ed, or accom­pa­nied by fal­si­fi­ed docu­men­ta­ti­on, it shall not place the system on the mar­ket until it has been brought into con­for­mi­ty. Whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 79(1), the importer shall inform the pro­vi­der of the system, the aut­ho­ri­sed repre­sen­ta­ti­ve and the mar­ket sur­veil­lan­ce aut­ho­ri­ties to that effect.

3. Importers shall indi­ca­te their name, regi­stered trade name or regi­stered trade mark, and the address at which they can be cont­ac­ted on the high-risk AI system and on its pack­a­ging or its accom­pany­ing docu­men­ta­ti­on, whe­re applicable.

4. Importers shall ensu­re that, while a high-risk AI system is under their respon­si­bi­li­ty, sto­rage or trans­port con­di­ti­ons, whe­re appli­ca­ble, do not jeo­par­di­se its com­pli­ance with the requi­re­ments set out in Sec­tion 2.

5. Importers shall keep, for a peri­od of 10 years after the high-risk AI system has been pla­ced on the mar­ket or put into ser­vice, a copy of the cer­ti­fi­ca­te issued by the noti­fi­ed body, whe­re appli­ca­ble, of the ins­truc­tions for use, and of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47.

6. Importers shall pro­vi­de the rele­vant com­pe­tent aut­ho­ri­ties, upon a rea­so­ned request, with all the neces­sa­ry infor­ma­ti­on and docu­men­ta­ti­on, inclu­ding that refer­red to in para­graph 5, to demon­stra­te the con­for­mi­ty of a high-risk AI system with the requi­re­ments set out in Sec­tion 2 in a lan­guage which can be easi­ly under­s­tood by them. For this pur­po­se, they shall also ensu­re that the tech­ni­cal docu­men­ta­ti­on can be made available to tho­se authorities.

7. Importers shall coope­ra­te with the rele­vant com­pe­tent aut­ho­ri­ties in any action tho­se aut­ho­ri­ties take in rela­ti­on to a high-risk AI system pla­ced on the mar­ket by the importers, in par­ti­cu­lar to redu­ce and miti­ga­te the risks posed by it.

Artic­le 24 Obli­ga­ti­ons of distributors

1. Befo­re making a high-risk AI system available on the mar­ket, dis­tri­bu­tors shall veri­fy that it bears the requi­red CE mar­king, that it is accom­pa­nied by a copy of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47 and ins­truc­tions for use, and that the pro­vi­der and the importer of that system, as appli­ca­ble, have com­plied with their respec­ti­ve obli­ga­ti­ons as laid down in Artic­le 16, points (b) and (c) and Artic­le 23(3).

2. Whe­re a dis­tri­bu­tor con­siders or has rea­son to con­sider, on the basis of the infor­ma­ti­on in its pos­ses­si­on, that a high-risk AI system is not in con­for­mi­ty with the requi­re­ments set out in Sec­tion 2, it shall not make the high-risk AI system available on the mar­ket until the system has been brought into con­for­mi­ty with tho­se requi­re­ments. Fur­ther­mo­re, whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 79(1), the dis­tri­bu­tor shall inform the pro­vi­der or the importer of the system, as appli­ca­ble, to that effect.

3. Dis­tri­bu­tors shall ensu­re that, while a high-risk AI system is under their respon­si­bi­li­ty, sto­rage or trans­port con­di­ti­ons, whe­re appli­ca­ble, do not jeo­par­di­se the com­pli­ance of the system with the requi­re­ments set out in Sec­tion 2.

4. A dis­tri­bu­tor that con­siders or has rea­son to con­sider, on the basis of the infor­ma­ti­on in its pos­ses­si­on, a high-risk AI system which it has made available on the mar­ket not to be in con­for­mi­ty with the requi­re­ments set out in Sec­tion 2, shall take the cor­rec­ti­ve actions neces­sa­ry to bring that system into con­for­mi­ty with tho­se requi­re­ments, to with­draw it or recall it, or shall ensu­re that the pro­vi­der, the importer or any rele­vant ope­ra­tor, as appro­pria­te, takes tho­se cor­rec­ti­ve actions. Whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 79(1), the dis­tri­bu­tor shall imme­dia­te­ly inform the pro­vi­der or importer of the system and the aut­ho­ri­ties com­pe­tent for the high-risk AI system con­cer­ned, giving details, in par­ti­cu­lar, of the non-com­pli­ance and of any cor­rec­ti­ve actions taken.

5. Upon a rea­so­ned request from a rele­vant com­pe­tent aut­ho­ri­ty, dis­tri­bu­tors of a high-risk AI system shall pro­vi­de that aut­ho­ri­ty with all the infor­ma­ti­on and docu­men­ta­ti­on regar­ding their actions pur­su­ant to para­graphs 1 to 4 neces­sa­ry to demon­stra­te the con­for­mi­ty of that system with the requi­re­ments set out in Sec­tion 2. 

6. Dis­tri­bu­tors shall coope­ra­te with the rele­vant com­pe­tent aut­ho­ri­ties in any action tho­se aut­ho­ri­ties take in rela­ti­on to a high-risk AI system made available on the mar­ket by the dis­tri­bu­tors, in par­ti­cu­lar to redu­ce or miti­ga­te the risk posed by it.

Artic­le 25 Respon­si­bi­li­ties along the AI value chain

(83) In light of the natu­re and com­ple­xi­ty of the value chain for AI systems and in line with the New Legis­la­ti­ve Frame­work, it is essen­ti­al to ensu­re legal cer­tain­ty and faci­li­ta­te the com­pli­ance with this Regu­la­ti­on. The­r­e­fo­re, it is neces­sa­ry to cla­ri­fy the role and the spe­ci­fic obli­ga­ti­ons of rele­vant ope­ra­tors along that value chain, such as importers and dis­tri­bu­tors who may con­tri­bu­te to the deve­lo­p­ment of AI systems. In cer­tain situa­tions tho­se ope­ra­tors could act in more than one role at the same time and should the­r­e­fo­re ful­fil cumu­la­tively all rele­vant obli­ga­ti­ons asso­cia­ted with tho­se roles. For exam­p­le, an ope­ra­tor could act as a dis­tri­bu­tor and an importer at the same time.

(88) Along the AI value chain mul­ti­ple par­ties often sup­p­ly AI systems, tools and ser­vices but also com­pon­ents or pro­ce­s­ses that are incor­po­ra­ted by the pro­vi­der into the AI system with various objec­ti­ves, inclu­ding the model trai­ning, model retrai­ning, model test­ing and eva­lua­ti­on, inte­gra­ti­on into soft­ware, or other aspects of model deve­lo­p­ment. Tho­se par­ties have an important role to play in the value chain towards the pro­vi­der of the high-risk AI system into which their AI systems, tools, ser­vices, com­pon­ents or pro­ce­s­ses are inte­gra­ted, and should pro­vi­de by writ­ten agree­ment this pro­vi­der with the neces­sa­ry infor­ma­ti­on, capa­bi­li­ties, tech­ni­cal access and other assi­stance based on the gene­ral­ly ack­now­led­ged sta­te of the art, in order to enable the pro­vi­der to ful­ly com­ply with the obli­ga­ti­ons set out in this Regu­la­ti­on, wit­hout com­pro­mi­sing their own intellec­tu­al pro­per­ty rights or trade secrets.

1. Any dis­tri­bu­tor, importer, deployer or other third-par­ty shall be con­side­red to be a pro­vi­der of a high-risk AI system for the pur­po­ses of this Regu­la­ti­on and shall be sub­ject to the obli­ga­ti­ons of the pro­vi­der under Artic­le 16, in any of the fol­lo­wing circumstances:

(a) they put their name or trade­mark on a high-risk AI system alre­a­dy pla­ced on the mar­ket or put into ser­vice, wit­hout pre­ju­di­ce to con­trac­tu­al arran­ge­ments sti­pu­la­ting that the obli­ga­ti­ons are other­wi­se allocated;

(b) they make a sub­stan­ti­al modi­fi­ca­ti­on to a high-risk AI system that has alre­a­dy been pla­ced on the mar­ket or has alre­a­dy been put into ser­vice in such a way that it remains a high-risk AI system pur­su­ant to Artic­le 6;

(c) they modi­fy the inten­ded pur­po­se of an AI system, inclu­ding a gene­ral-pur­po­se AI system, which has not been clas­si­fi­ed as high-risk and has alre­a­dy been pla­ced on the mar­ket or put into ser­vice in such a way that the AI system con­cer­ned beco­mes a high-risk AI system in accordance with Artic­le 6.
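Unlike the cumulative importer checks, the three triggers in points (a) to (c) are alternatives: any single one suffices to shift the Article 16 provider obligations onto the downstream operator. The sketch below encodes that disjunction; the function and parameter names are illustrative assumptions, not terms of the Regulation, and the contractual carve-out in point (a) is deliberately left out for brevity.

```python
def becomes_provider(puts_own_name_or_trademark: bool,
                     substantial_modification_still_high_risk: bool,
                     repurposed_into_high_risk: bool) -> bool:
    """Hypothetical encoding of Article 25(1): True if a distributor,
    importer, deployer or other third party is to be considered the
    provider and assumes the Article 16 obligations."""
    return any((puts_own_name_or_trademark,                # point (a)
                substantial_modification_still_high_risk,  # point (b)
                repurposed_into_high_risk))                # point (c)
```

Where the function would return True, paragraph 2 then governs the hand-over: the initial provider ceases to be the provider of that specific system but must cooperate with the new one.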

(84) To ensu­re legal cer­tain­ty, it is neces­sa­ry to cla­ri­fy that, under cer­tain spe­ci­fic con­di­ti­ons, any dis­tri­bu­tor, importer, deployer or other third-par­ty should be con­side­red to be a pro­vi­der of a high-risk AI system and the­r­e­fo­re assu­me all the rele­vant obli­ga­ti­ons. This would be the case if that par­ty puts its name or trade­mark on a high-risk AI system alre­a­dy pla­ced on the mar­ket or put into ser­vice, wit­hout pre­ju­di­ce to con­trac­tu­al arran­ge­ments sti­pu­la­ting that the obli­ga­ti­ons are allo­ca­ted other­wi­se. This would also be the case if that par­ty makes a sub­stan­ti­al modi­fi­ca­ti­on to a high-risk AI system that has alre­a­dy been pla­ced on the mar­ket or has alre­a­dy been put into ser­vice in a way that it remains a high-risk AI system in accordance with this Regu­la­ti­on, or if it modi­fi­es the inten­ded pur­po­se of an AI system, inclu­ding a gene­ral-pur­po­se AI system, which has not been clas­si­fi­ed as high-risk and has alre­a­dy been pla­ced on the mar­ket or put into ser­vice, in a way that the AI system beco­mes a high-risk AI system in accordance with this Regu­la­ti­on. Tho­se pro­vi­si­ons should app­ly wit­hout pre­ju­di­ce to more spe­ci­fic pro­vi­si­ons estab­lished in cer­tain Uni­on har­mo­ni­sa­ti­on legis­la­ti­on based on the New Legis­la­ti­ve Frame­work, tog­e­ther with which this Regu­la­ti­on should app­ly. For exam­p­le, Artic­le 16(2) of Regu­la­ti­on (EU) 2017/745, estab­li­shing that cer­tain chan­ges should not be con­side­red to be modi­fi­ca­ti­ons of a device that could affect its com­pli­ance with the appli­ca­ble requi­re­ments, should con­ti­n­ue to app­ly to high-risk AI systems that are medi­cal devices within the mea­ning of that Regulation.

2. Whe­re the cir­cum­stances refer­red to in para­graph 1 occur, the pro­vi­der that initi­al­ly pla­ced the AI system on the mar­ket or put it into ser­vice shall no lon­ger be con­side­red to be a pro­vi­der of that spe­ci­fic AI system for the pur­po­ses of this Regu­la­ti­on. That initi­al pro­vi­der shall clo­se­ly coope­ra­te with new pro­vi­ders and shall make available the neces­sa­ry infor­ma­ti­on and pro­vi­de the rea­son­ab­ly expec­ted tech­ni­cal access and other assi­stance that are requi­red for the ful­film­ent of the obli­ga­ti­ons set out in this Regu­la­ti­on, in par­ti­cu­lar regar­ding the com­pli­ance with the con­for­mi­ty assess­ment of high-risk AI systems. This para­graph shall not app­ly in cases whe­re the initi­al pro­vi­der has cle­ar­ly spe­ci­fi­ed that its AI system is not to be chan­ged into a high-risk AI system and the­r­e­fo­re does not fall under the obli­ga­ti­on to hand over the documentation. 

(86) Whe­re, under the con­di­ti­ons laid down in this Regu­la­ti­on, the pro­vi­der that initi­al­ly pla­ced the AI system on the mar­ket or put it into ser­vice should no lon­ger be con­side­red to be the pro­vi­der for the pur­po­ses of this Regu­la­ti­on, and when that pro­vi­der has not express­ly exclu­ded the chan­ge of the AI system into a high-risk AI system, the for­mer pro­vi­der should none­thel­ess clo­se­ly coope­ra­te and make available the neces­sa­ry infor­ma­ti­on and pro­vi­de the rea­son­ab­ly expec­ted tech­ni­cal access and other assi­stance that are requi­red for the ful­film­ent of the obli­ga­ti­ons set out in this Regu­la­ti­on, in par­ti­cu­lar regar­ding the com­pli­ance with the con­for­mi­ty assess­ment of high-risk AI systems.

3. In the case of high-risk AI systems that are safe­ty com­pon­ents of pro­ducts cover­ed by the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Sec­tion A of Annex I, the pro­duct manu­fac­tu­rer shall be con­side­red to be the pro­vi­der of the high-risk AI system, and shall be sub­ject to the obli­ga­ti­ons under Artic­le 16 under eit­her of the fol­lo­wing circumstances:

(a) the high-risk AI system is pla­ced on the mar­ket tog­e­ther with the pro­duct under the name or trade­mark of the pro­duct manufacturer;

(b) the high-risk AI system is put into ser­vice under the name or trade­mark of the pro­duct manu­fac­tu­rer after the pro­duct has been pla­ced on the market.

(87) In addi­ti­on, whe­re a high-risk AI system that is a safe­ty com­po­nent of a pro­duct which falls within the scope of Uni­on har­mo­ni­sa­ti­on legis­la­ti­on based on the New Legis­la­ti­ve Frame­work is not pla­ced on the mar­ket or put into ser­vice inde­pendent­ly from the pro­duct, the pro­duct manu­fac­tu­rer defi­ned in that legis­la­ti­on should com­ply with the obli­ga­ti­ons of the pro­vi­der estab­lished in this Regu­la­ti­on and should, in par­ti­cu­lar, ensu­re that the AI system embedded in the final pro­duct com­plies with the requi­re­ments of this Regulation.

4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence.

The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format.

(89) Third parties making accessible to the public tools, services, processes, or AI components other than general-purpose AI models, should not be mandated to comply with requirements targeting the responsibilities along the AI value chain, in particular towards the provider that has used or integrated them, when those tools, services, processes, or AI components are made accessible under a free and open-source licence. Developers of free and open-source tools, services, processes, or AI components other than general-purpose AI models should be encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.

(90) The Commission could develop and recommend voluntary model contractual terms between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used or integrated in high-risk AI systems, to facilitate the cooperation along the value chain. When developing voluntary model contractual terms, the Commission should also take into account possible contractual requirements applicable in specific sectors or business cases.

5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law.

Article 26 Obligations of deployers of high-risk AI systems

(91) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. Those obligations should be without prejudice to other deployer obligations in relation to high-risk AI systems under Union or national law.

(92) This Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or their representatives under Union or national law and practice, including Directive 2002/14/EC of the European Parliament and of the Council, on decisions to put into service or use AI systems. It remains necessary to ensure information of workers and their representatives on the planned deployment of high-risk AI systems at the workplace where the conditions for those information or information and consultation obligations in other legal instruments are not fulfilled. Moreover, such information right is ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect should be laid down in this Regulation, without affecting any existing rights of workers.

1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6.

2. Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.

3. The obligations set out in paragraphs 1 and 2, are without prejudice to other deployer obligations under Union or national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.

4. Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.

5. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.

For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law.

6. Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data.

Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law.

7. Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives.

8. Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor.

9. Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.

10. Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence.

If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted.

In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems.

This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of Directive (EU) 2016/680 for the processing of biometric data.

Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities.

Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment.

Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric identification systems.

(95) Without prejudice to applicable Union law, in particular Regulation (EU) 2016/679 and Directive (EU) 2016/680, considering the intrusive nature of post-remote biometric identification systems, the use of post-remote biometric identification systems should be subject to safeguards. Post-remote biometric identification systems should always be used in a way that is proportionate, legitimate and strictly necessary, and thus targeted, in terms of the individuals to be identified, the location, temporal scope and based on a closed data set of legally acquired video footage. In any case, post-remote biometric identification systems should not be used in the framework of law enforcement to lead to indiscriminate surveillance. The conditions for post-remote biometric identification should in any case not provide a basis to circumvent the conditions of the prohibition and strict exceptions for real time remote biometric identification.

11. Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply.

(93) Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI system therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the persons or groups of persons likely to be affected, including vulnerable groups. Deployers of high-risk AI systems listed in an annex to this Regulation also play a critical role in informing natural persons and should, when they make decisions or assist in making decisions related to natural persons, where applicable, inform the natural persons that they are subject to the use of the high-risk AI system. This information should include the intended purpose and the type of decisions it makes. The deployer should also inform the natural persons about their right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law enforcement purposes, that obligation should be implemented in accordance with Article 13 of Directive (EU) 2016/680.

12. Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation.

Article 27 Fundamental rights impact assessment for high-risk AI systems

1. Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of:

(a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

(b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;

(c) the categories of natural persons and groups likely to be affected by its use in the specific context;

(d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;

(e) a description of the implementation of human oversight measures, according to the instructions for use;

(f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.

2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.

3. Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.

4. If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment.

5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under this Article in a simplified manner.

(96) In order to efficiently ensure that fundamental rights are protected, deployers of high-risk AI systems that are bodies governed by public law, or private entities providing public services and deployers of certain high-risk AI systems listed in an annex to this Regulation, such as banking or insurance entities, should carry out a fundamental rights impact assessment prior to putting it into use. Services important for individuals that are of public nature may also be provided by private entities. Private entities providing such public services are linked to tasks in the public interest such as in the areas of education, healthcare, social services, housing, administration of justice. The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or groups of individuals likely to be affected, identify measures to be taken in the case of a materialisation of those risks. The impact assessment should be performed prior to deploying the high-risk AI system, and should be updated when the deployer considers that any of the relevant factors have changed. The impact assessment should identify the deployer’s relevant processes in which the high-risk AI system will be used in line with its intended purpose, and should include a description of the period of time and frequency in which the system is intended to be used as well as of specific categories of natural persons and groups who are likely to be affected in the specific context of use.

The assessment should also include the identification of specific risks of harm likely to have an impact on the fundamental rights of those persons or groups. While performing this assessment, the deployer should take into account information relevant to a proper assessment of the impact, including but not limited to the information given by the provider of the high-risk AI system in the instructions for use. In light of the risks identified, deployers should determine measures to be taken in the case of a materialisation of those risks, including for example governance arrangements in that specific context of use, such as arrangements for human oversight according to the instructions of use or, complaint handling and redress procedures, as they could be instrumental in mitigating risks to fundamental rights in concrete use-cases. After performing that impact assessment, the deployer should notify the relevant market surveillance authority. Where appropriate, to collect relevant information necessary to perform the impact assessment, deployers of high-risk AI system, in particular when AI systems are used in the public sector, could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI system, independent experts, and civil society organisations in conducting such impact assessments and designing measures to be taken in the case of materialisation of the risks. The European Artificial Intelligence Office (AI Office) should develop a template for a questionnaire in order to facilitate compliance and reduce the administrative burden for deployers.

Section 4 Notifying Authorities And Notified Bodies

Article 28 Notifying authorities

1. Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. Those procedures shall be developed in cooperation between the notifying authorities of all Member States.

2. Member States may decide that the assessment and monitoring referred to in paragraph 1 is to be carried out by a national accreditation body within the meaning of, and in accordance with, Regulation (EC) No 765/2008.

3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies, and that the objectivity and impartiality of their activities are safeguarded.

4. Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.

5. Notifying authorities shall offer or provide neither any activities that conformity assessment bodies perform, nor any consultancy services on a commercial or competitive basis.

6. Notifying authorities shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78.

7. Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their function, in fields such as information technologies, AI and law, including the supervision of fundamental rights.

Article 29 Application of a conformity assessment body for notification

1. Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established.

2. The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the types of AI systems for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 31.

Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added.

3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 31.

4. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate. The notified body shall update the documentation referred to in paragraphs 2 and 3 of this Article whenever relevant changes occur, in order to enable the authority responsible for notified bodies to monitor and verify continuous compliance with all the requirements laid down in Article 31.

Article 30 Notification procedure

1. Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down in Article 31.

2. Notifying authorities shall notify the Commission and the other Member States, using the electronic notification tool developed and managed by the Commission, of each conformity assessment body referred to in paragraph 1.

3. The notification referred to in paragraph 2 of this Article shall include full details of the conformity assessment activities, the conformity assessment module or modules, the types of AI systems concerned, and the relevant attestation of competence. Where a notification is not based on an accreditation certificate as referred to in Article 29(2), the notifying authority shall provide the Commission and the other Member States with documentary evidence which attests to the competence of the conformity assessment body and to the arrangements in place to ensure that that body will be monitored regularly and will continue to satisfy the requirements laid down in Article 31.

4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within two weeks of a notification by a notifying authority where it includes an accreditation certificate referred to in Article 29(2), or within two months of a notification by the notifying authority where it includes documentary evidence referred to in Article 29(3).

5. Where objections are raised, the Commission shall, without delay, enter into consultations with the relevant Member States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant conformity assessment body.

Article 31 Requirements relating to notified bodies

1. A notified body shall be established under the national law of a Member State and shall have legal personality.

2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements.

3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall ensure confidence in their performance, and in the results of the conformity assessment activities that the notified bodies conduct.

4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in high-risk AI systems assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed high-risk AI systems that are necessary for the operations of the conformity assessment body, or the use of such high-risk AI systems for personal purposes.

5. Neither a conformity assessment body, its top-level management nor the personnel responsible for carrying out its conformity assessment tasks shall be directly involved in the design, development, marketing or use of high-risk AI systems, nor shall they represent the parties engaged in those activities. They shall not engage in any activity that might conflict with their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. This shall, in particular, apply to consultancy services.

6. Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.

7. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when its disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.

8. Notified bodies shall have procedures for the performance of activities which take due account of the size of a provider, the sector in which it operates, its structure, and the degree of complexity of the AI system concerned.

9. Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State in which they are established in accordance with national law or that Member State is itself directly responsible for the conformity assessment.

10. Notified bodies shall be capable of carrying out all their tasks under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility.

11. Notified bodies shall have sufficient internal competences to be able effectively to evaluate the tasks conducted by external parties on their behalf. The notified body shall have permanent availability of sufficient administrative, technical, legal and scientific personnel who possess experience and knowledge relating to the relevant types of AI systems, data and data computing, and relating to the requirements set out in Section 2.

12. Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly, or be represented in, European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards.

Article 32 Presumption of conformity with requirements relating to notified bodies

Where a conformity assessment body demonstrates its conformity with the criteria laid down in the relevant harmonised standards or parts thereof, the references of which have been published in the Official Journal of the European Union, it shall be presumed to comply with the requirements set out in Article 31 in so far as the applicable harmonised standards cover those requirements.

Article 33 Subsidiaries of notified bodies and subcontracting

1. Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 31, and shall inform the notifying authority accordingly.

2. Notified bodies shall take full responsibility for the tasks performed by any subcontractors or subsidiaries.

3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider. Notified bodies shall make a list of their subsidiaries publicly available.

4. The relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation shall be kept at the disposal of the notifying authority for a period of five years from the termination date of the subcontracting.

Artic­le 34 Ope­ra­tio­nal obli­ga­ti­ons of noti­fi­ed bodies

1. Noti­fi­ed bodies shall veri­fy the con­for­mi­ty of high-risk AI systems in accordance with the con­for­mi­ty assess­ment pro­ce­du­res set out in Artic­le 43.

2. Noti­fi­ed bodies shall avo­id unneces­sa­ry bur­dens for pro­vi­ders when per­forming their acti­vi­ties, and take due account of the size of the pro­vi­der, the sec­tor in which it ope­ra­tes, its struc­tu­re and the degree of com­ple­xi­ty of the high-risk AI system con­cer­ned, in par­ti­cu­lar in view of mini­mi­sing admi­ni­stra­ti­ve bur­dens and com­pli­ance costs for micro- and small enter­pri­ses within the mea­ning of Recom­men­da­ti­on 2003/361/EC. The noti­fi­ed body shall, nevert­hel­ess, respect the degree of rigour and the level of pro­tec­tion requi­red for the com­pli­ance of the high-risk AI system with the requi­re­ments of this Regulation.

3. Noti­fi­ed bodies shall make available and sub­mit upon request all rele­vant docu­men­ta­ti­on, inclu­ding the pro­vi­ders’ docu­men­ta­ti­on, to the noti­fy­ing aut­ho­ri­ty refer­red to in Artic­le 28 to allow that aut­ho­ri­ty to con­duct its assess­ment, desi­gna­ti­on, noti­fi­ca­ti­on and moni­to­ring acti­vi­ties, and to faci­li­ta­te the assess­ment out­lined in this Section.

Artic­le 35 Iden­ti­fi­ca­ti­on num­bers and lists of noti­fi­ed bodies

1. The Com­mis­si­on shall assign a sin­gle iden­ti­fi­ca­ti­on num­ber to each noti­fi­ed body, even whe­re a body is noti­fi­ed under more than one Uni­on act.

2. The Com­mis­si­on shall make publicly available the list of the bodies noti­fi­ed under this Regu­la­ti­on, inclu­ding their iden­ti­fi­ca­ti­on num­bers and the acti­vi­ties for which they have been noti­fi­ed. The Com­mis­si­on shall ensu­re that the list is kept up to date.

Artic­le 36 Chan­ges to notifications

1. The noti­fy­ing aut­ho­ri­ty shall noti­fy the Com­mis­si­on and the other Mem­ber Sta­tes of any rele­vant chan­ges to the noti­fi­ca­ti­on of a noti­fi­ed body via the elec­tro­nic noti­fi­ca­ti­on tool refer­red to in Artic­le 30(2).

2. The pro­ce­du­res laid down in Artic­les 29 and 30 shall app­ly to exten­si­ons of the scope of the notification.

For chan­ges to the noti­fi­ca­ti­on other than exten­si­ons of its scope, the pro­ce­du­res laid down in para­graphs (3) to (9) shall apply.

3. Whe­re a noti­fi­ed body deci­des to cea­se its con­for­mi­ty assess­ment acti­vi­ties, it shall inform the noti­fy­ing aut­ho­ri­ty and the pro­vi­ders con­cer­ned as soon as pos­si­ble and, in the case of a plan­ned ces­sa­ti­on, at least one year befo­re cea­sing its acti­vi­ties. The cer­ti­fi­ca­tes of the noti­fi­ed body may remain valid for a peri­od of nine months after ces­sa­ti­on of the noti­fi­ed body’s acti­vi­ties, on con­di­ti­on that ano­ther noti­fi­ed body has con­firm­ed in wri­ting that it will assu­me respon­si­bi­li­ties for the high-risk AI systems cover­ed by tho­se cer­ti­fi­ca­tes. The lat­ter noti­fi­ed body shall com­ple­te a full assess­ment of the high-risk AI systems affec­ted by the end of that nine-month peri­od befo­re issuing new cer­ti­fi­ca­tes for tho­se systems. Whe­re the noti­fi­ed body has cea­sed its acti­vi­ty, the noti­fy­ing aut­ho­ri­ty shall with­draw the designation.

4. Whe­re a noti­fy­ing aut­ho­ri­ty has suf­fi­ci­ent rea­son to con­sider that a noti­fi­ed body no lon­ger meets the requi­re­ments laid down in Artic­le 31, or that it is fai­ling to ful­fil its obli­ga­ti­ons, the noti­fy­ing aut­ho­ri­ty shall wit­hout delay inve­sti­ga­te the mat­ter with the utmost dili­gence. In that con­text, it shall inform the noti­fi­ed body con­cer­ned about the objec­tions rai­sed and give it the pos­si­bi­li­ty to make its views known. If the noti­fy­ing aut­ho­ri­ty comes to the con­clu­si­on that the noti­fi­ed body no lon­ger meets the requi­re­ments laid down in Artic­le 31 or that it is fai­ling to ful­fil its obli­ga­ti­ons, it shall rest­rict, sus­pend or with­draw the desi­gna­ti­on as appro­pria­te, depen­ding on the serious­ness of the fail­ure to meet tho­se requi­re­ments or ful­fil tho­se obli­ga­ti­ons. It shall imme­dia­te­ly inform the Com­mis­si­on and the other Mem­ber Sta­tes accordingly.

5. Whe­re its desi­gna­ti­on has been sus­pen­ded, rest­ric­ted, or ful­ly or par­ti­al­ly with­drawn, the noti­fi­ed body shall inform the pro­vi­ders con­cer­ned within 10 days.

6. In the event of the rest­ric­tion, sus­pen­si­on or with­dra­wal of a desi­gna­ti­on, the noti­fy­ing aut­ho­ri­ty shall take appro­pria­te steps to ensu­re that the files of the noti­fi­ed body con­cer­ned are kept, and to make them available to noti­fy­ing aut­ho­ri­ties in other Mem­ber Sta­tes and to mar­ket sur­veil­lan­ce aut­ho­ri­ties at their request.

7. In the event of the rest­ric­tion, sus­pen­si­on or with­dra­wal of a desi­gna­ti­on, the noti­fy­ing aut­ho­ri­ty shall:

(a) assess the impact on the cer­ti­fi­ca­tes issued by the noti­fi­ed body;

(b) sub­mit a report on its fin­dings to the Com­mis­si­on and the other Mem­ber Sta­tes within three months of having noti­fi­ed the chan­ges to the designation;

(c) requi­re the noti­fi­ed body to sus­pend or with­draw, within a rea­sonable peri­od of time deter­mi­ned by the aut­ho­ri­ty, any cer­ti­fi­ca­tes which were undu­ly issued, in order to ensu­re the con­ti­nuing con­for­mi­ty of high-risk AI systems on the market;

(d) inform the Com­mis­si­on and the Mem­ber Sta­tes about cer­ti­fi­ca­tes the sus­pen­si­on or with­dra­wal of which it has required;

(e) pro­vi­de the natio­nal com­pe­tent aut­ho­ri­ties of the Mem­ber Sta­te in which the pro­vi­der has its regi­stered place of busi­ness with all rele­vant infor­ma­ti­on about the cer­ti­fi­ca­tes of which it has requi­red the sus­pen­si­on or with­dra­wal; that aut­ho­ri­ty shall take the appro­pria­te mea­su­res, whe­re neces­sa­ry, to avo­id a poten­ti­al risk to health, safe­ty or fun­da­men­tal rights.

8. With the excep­ti­on of cer­ti­fi­ca­tes undu­ly issued, and whe­re a desi­gna­ti­on has been sus­pen­ded or rest­ric­ted, the cer­ti­fi­ca­tes shall remain valid in one of the fol­lo­wing circumstances:

(a) the noti­fy­ing aut­ho­ri­ty has con­firm­ed, within one month of the sus­pen­si­on or rest­ric­tion, that the­re is no risk to health, safe­ty or fun­da­men­tal rights in rela­ti­on to cer­ti­fi­ca­tes affec­ted by the sus­pen­si­on or rest­ric­tion, and the noti­fy­ing aut­ho­ri­ty has out­lined a time­line for actions to reme­dy the sus­pen­si­on or rest­ric­tion; or

(b) the noti­fy­ing aut­ho­ri­ty has con­firm­ed that no cer­ti­fi­ca­tes rele­vant to the sus­pen­si­on will be issued, amen­ded or re-issued during the cour­se of the sus­pen­si­on or rest­ric­tion, and sta­tes whe­ther the noti­fi­ed body has the capa­bi­li­ty of con­ti­nuing to moni­tor and remain respon­si­ble for exi­sting cer­ti­fi­ca­tes issued for the peri­od of the sus­pen­si­on or rest­ric­tion; in the event that the noti­fy­ing aut­ho­ri­ty deter­mi­nes that the noti­fi­ed body does not have the capa­bi­li­ty to sup­port exi­sting cer­ti­fi­ca­tes issued, the pro­vi­der of the system cover­ed by the cer­ti­fi­ca­te shall con­firm in wri­ting to the natio­nal com­pe­tent aut­ho­ri­ties of the Mem­ber Sta­te in which it has its regi­stered place of busi­ness, within three months of the sus­pen­si­on or rest­ric­tion, that ano­ther qua­li­fi­ed noti­fi­ed body is tem­po­r­a­ri­ly assum­ing the func­tions of the noti­fi­ed body to moni­tor and remain respon­si­ble for the cer­ti­fi­ca­tes during the peri­od of sus­pen­si­on or restriction.

9. With the excep­ti­on of cer­ti­fi­ca­tes undu­ly issued, and whe­re a desi­gna­ti­on has been with­drawn, the cer­ti­fi­ca­tes shall remain valid for a peri­od of nine months under the fol­lo­wing circumstances:

(a) the natio­nal com­pe­tent aut­ho­ri­ty of the Mem­ber Sta­te in which the pro­vi­der of the high-risk AI system cover­ed by the cer­ti­fi­ca­te has its regi­stered place of busi­ness has con­firm­ed that the­re is no risk to health, safe­ty or fun­da­men­tal rights asso­cia­ted with the high-risk AI systems con­cer­ned; and

(b) ano­ther noti­fi­ed body has con­firm­ed in wri­ting that it will assu­me imme­dia­te respon­si­bi­li­ty for tho­se AI systems and com­ple­tes its assess­ment within 12 months of the with­dra­wal of the designation.

In the cir­cum­stances refer­red to in the first sub­pa­ra­graph, the natio­nal com­pe­tent aut­ho­ri­ty of the Mem­ber Sta­te in which the pro­vi­der of the system cover­ed by the cer­ti­fi­ca­te has its place of busi­ness may extend the pro­vi­sio­nal vali­di­ty of the cer­ti­fi­ca­tes for addi­tio­nal peri­ods of three months, which shall not exce­ed 12 months in total.

The natio­nal com­pe­tent aut­ho­ri­ty or the noti­fi­ed body assum­ing the func­tions of the noti­fi­ed body affec­ted by the chan­ge of desi­gna­ti­on shall imme­dia­te­ly inform the Com­mis­si­on, the other Mem­ber Sta­tes and the other noti­fi­ed bodies thereof.

Artic­le 37 Chall­enge to the com­pe­tence of noti­fi­ed bodies

1. The Com­mis­si­on shall, whe­re neces­sa­ry, inve­sti­ga­te all cases whe­re the­re are rea­sons to doubt the com­pe­tence of a noti­fi­ed body or the con­tin­ued ful­film­ent by a noti­fi­ed body of the requi­re­ments laid down in Artic­le 31 and of its appli­ca­ble responsibilities.

2. The noti­fy­ing aut­ho­ri­ty shall pro­vi­de the Com­mis­si­on, on request, with all rele­vant infor­ma­ti­on rela­ting to the noti­fi­ca­ti­on or the main­ten­an­ce of the com­pe­tence of the noti­fi­ed body concerned.

3. The Com­mis­si­on shall ensu­re that all sen­si­ti­ve infor­ma­ti­on obtai­ned in the cour­se of its inve­sti­ga­ti­ons pur­su­ant to this Artic­le is trea­ted con­fi­den­ti­al­ly in accordance with Artic­le 78.

4. Whe­re the Com­mis­si­on ascer­ta­ins that a noti­fi­ed body does not meet or no lon­ger meets the requi­re­ments for its noti­fi­ca­ti­on, it shall inform the noti­fy­ing Mem­ber Sta­te accor­din­gly and request it to take the neces­sa­ry cor­rec­ti­ve mea­su­res, inclu­ding the sus­pen­si­on or with­dra­wal of the noti­fi­ca­ti­on if neces­sa­ry. Whe­re the Mem­ber Sta­te fails to take the neces­sa­ry cor­rec­ti­ve mea­su­res, the Com­mis­si­on may, by means of an imple­men­ting act, sus­pend, rest­rict or with­draw the desi­gna­ti­on. That imple­men­ting act shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

Artic­le 38 Coor­di­na­ti­on of noti­fi­ed bodies

1. The Com­mis­si­on shall ensu­re that, with regard to high-risk AI systems, appro­pria­te coor­di­na­ti­on and coope­ra­ti­on bet­ween noti­fi­ed bodies acti­ve in the con­for­mi­ty assess­ment pro­ce­du­res pur­su­ant to this Regu­la­ti­on are put in place and pro­per­ly ope­ra­ted in the form of a sec­to­ral group of noti­fi­ed bodies.

2. Each noti­fy­ing aut­ho­ri­ty shall ensu­re that the bodies noti­fi­ed by it par­ti­ci­pa­te in the work of a group refer­red to in para­graph 1, direct­ly or through desi­gna­ted representatives.

3. The Com­mis­si­on shall pro­vi­de for the exch­an­ge of know­ledge and best prac­ti­ces bet­ween noti­fy­ing authorities.

Artic­le 39 Con­for­mi­ty assess­ment bodies of third countries

Con­for­mi­ty assess­ment bodies estab­lished under the law of a third coun­try with which the Uni­on has con­clu­ded an agree­ment may be aut­ho­ri­sed to car­ry out the acti­vi­ties of noti­fi­ed bodies under this Regu­la­ti­on, pro­vi­ded that they meet the requi­re­ments laid down in Artic­le 31 or they ensu­re an equi­va­lent level of compliance.

Sec­tion 5 Stan­dards, Con­for­mi­ty Assess­ment, Cer­ti­fi­ca­tes, Registration

Artic­le 40 Har­mo­ni­s­ed stan­dards and stan­dar­di­sati­on deliverables

1. High-risk AI systems or gene­ral-pur­po­se AI models which are in con­for­mi­ty with har­mo­ni­s­ed stan­dards or parts the­reof the refe­ren­ces of which have been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on in accordance with Regu­la­ti­on (EU) No 1025/2012 shall be pre­su­med to be in con­for­mi­ty with the requi­re­ments set out in Sec­tion 2 of this Chap­ter or, as appli­ca­ble, with the obli­ga­ti­ons set out in Chap­ter V, Sec­tions 2 and 3, of this Regu­la­ti­on, to the ext­ent that tho­se stan­dards cover tho­se requi­re­ments or obligations.

2. In accordance with Artic­le 10 of Regu­la­ti­on (EU) No 1025/2012, the Com­mis­si­on shall issue, wit­hout undue delay, stan­dar­di­sati­on requests cove­ring all requi­re­ments set out in Sec­tion 2 of this Chap­ter and, as appli­ca­ble, stan­dar­di­sati­on requests cove­ring obli­ga­ti­ons set out in Chap­ter V, Sec­tions 2 and 3, of this Regu­la­ti­on. The stan­dar­di­sati­on request shall also ask for deli­ver­a­bles on report­ing and docu­men­ta­ti­on pro­ce­s­ses to impro­ve AI systems’ resour­ce per­for­mance, such as redu­cing the high-risk AI system’s con­sump­ti­on of ener­gy and of other resour­ces during its life­cy­cle, and on the ener­gy-effi­ci­ent deve­lo­p­ment of gene­ral-pur­po­se AI models. When pre­pa­ring a stan­dar­di­sati­on request, the Com­mis­si­on shall con­sult the Board and rele­vant stake­hol­ders, inclu­ding the advi­so­ry forum.

When issuing a stan­dar­di­sati­on request to Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons, the Com­mis­si­on shall spe­ci­fy that stan­dards have to be clear, con­si­stent, inclu­ding with the stan­dards deve­lo­ped in the various sec­tors for pro­ducts cover­ed by the exi­sting Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex I, and aiming to ensu­re that high-risk AI systems or gene­ral-pur­po­se AI models pla­ced on the mar­ket or put into ser­vice in the Uni­on meet the rele­vant requi­re­ments or obli­ga­ti­ons laid down in this Regulation.

The Com­mis­si­on shall request the Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons to pro­vi­de evi­dence of their best efforts to ful­fil the objec­ti­ves refer­red to in the first and the second sub­pa­ra­graph of this para­graph in accordance with Artic­le 24 of Regu­la­ti­on (EU) No 1025/2012.

3. The par­ti­ci­pan­ts in the stan­dar­di­sati­on pro­cess shall seek to pro­mo­te invest­ment and inno­va­ti­on in AI, inclu­ding through incre­a­sing legal cer­tain­ty, as well as the com­pe­ti­ti­ve­ness and growth of the Uni­on mar­ket, to con­tri­bu­te to streng­thening glo­bal coope­ra­ti­on on stan­dar­di­sati­on and taking into account exi­sting inter­na­tio­nal stan­dards in the field of AI that are con­si­stent with Uni­on values, fun­da­men­tal rights and inte­rests, and to enhan­ce mul­ti-stake­hol­der gover­nan­ce ensu­ring a balan­ced repre­sen­ta­ti­on of inte­rests and the effec­ti­ve par­ti­ci­pa­ti­on of all rele­vant stake­hol­ders in accordance with Artic­les 5, 6, and 7 of Regu­la­ti­on (EU) No 1025/2012.

Artic­le 41 Com­mon specifications

1. The Com­mis­si­on may adopt imple­men­ting acts estab­li­shing com­mon spe­ci­fi­ca­ti­ons for the requi­re­ments set out in Sec­tion 2 of this Chap­ter or, as appli­ca­ble, for the obli­ga­ti­ons set out in Sec­tions 2 and 3 of Chap­ter V whe­re the fol­lo­wing con­di­ti­ons have been fulfilled:

(a) the Com­mis­si­on has reque­sted, pur­su­ant to Artic­le 10(1) of Regu­la­ti­on (EU) No 1025/2012, one or more Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons to draft a har­mo­ni­s­ed stan­dard for the requi­re­ments set out in Sec­tion 2 of this Chap­ter, or, as appli­ca­ble, for the obli­ga­ti­ons set out in Sec­tions 2 and 3 of Chap­ter V, and:

(i) the request has not been accept­ed by any of the Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons; or

(ii) the har­mo­ni­s­ed stan­dards addres­sing that request are not deli­ver­ed within the dead­line set in accordance with Artic­le 10(1) of Regu­la­ti­on (EU) No 1025/2012; or

(iii) the rele­vant har­mo­ni­s­ed stan­dards insuf­fi­ci­ent­ly address fun­da­men­tal rights con­cerns; or

(iv) the har­mo­ni­s­ed stan­dards do not com­ply with the request; and

(b) no refe­rence to har­mo­ni­s­ed stan­dards cove­ring the requi­re­ments refer­red to in Sec­tion 2 of this Chap­ter or, as appli­ca­ble, the obli­ga­ti­ons refer­red to in Sec­tions 2 and 3 of Chap­ter V has been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on in accordance with Regu­la­ti­on (EU) No 1025/2012, and no such refe­rence is expec­ted to be published within a rea­sonable period.

When draf­ting the com­mon spe­ci­fi­ca­ti­ons, the Com­mis­si­on shall con­sult the advi­so­ry forum refer­red to in Artic­le 67.

The imple­men­ting acts refer­red to in the first sub­pa­ra­graph of this para­graph shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

2. Befo­re pre­pa­ring a draft imple­men­ting act, the Com­mis­si­on shall inform the com­mit­tee refer­red to in Artic­le 22 of Regu­la­ti­on (EU) No 1025/2012 that it con­siders the con­di­ti­ons laid down in para­graph 1 of this Artic­le to be fulfilled. 

3. High-risk AI systems or gene­ral-pur­po­se AI models which are in con­for­mi­ty with the com­mon spe­ci­fi­ca­ti­ons refer­red to in para­graph 1, or parts of tho­se spe­ci­fi­ca­ti­ons, shall be pre­su­med to be in con­for­mi­ty with the requi­re­ments set out in Sec­tion 2 of this Chap­ter or, as appli­ca­ble, to com­ply with the obli­ga­ti­ons refer­red to in Sec­tions 2 and 3 of Chap­ter V, to the ext­ent tho­se com­mon spe­ci­fi­ca­ti­ons cover tho­se requi­re­ments or tho­se obligations.

4. Whe­re a har­mo­ni­s­ed stan­dard is adopted by a Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­on and pro­po­sed to the Com­mis­si­on for the publi­ca­ti­on of its refe­rence in the Offi­ci­al Jour­nal of the Euro­pean Uni­on, the Com­mis­si­on shall assess the har­mo­ni­s­ed stan­dard in accordance with Regu­la­ti­on (EU) No 1025/2012. When refe­rence to a har­mo­ni­s­ed stan­dard is published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on, the Com­mis­si­on shall repeal the imple­men­ting acts refer­red to in para­graph 1, or parts the­reof which cover the same requi­re­ments set out in Sec­tion 2 of this Chap­ter or, as appli­ca­ble, the same obli­ga­ti­ons set out in Sec­tions 2 and 3 of Chap­ter V.

5. Whe­re pro­vi­ders of high-risk AI systems or gene­ral-pur­po­se AI models do not com­ply with the com­mon spe­ci­fi­ca­ti­ons refer­red to in para­graph 1, they shall duly justi­fy that they have adopted tech­ni­cal solu­ti­ons that meet the requi­re­ments refer­red to in Sec­tion 2 of this Chap­ter or, as appli­ca­ble, com­ply with the obli­ga­ti­ons set out in Sec­tions 2 and 3 of Chap­ter V to a level at least equi­va­lent thereto.

6. Whe­re a Mem­ber Sta­te con­siders that a com­mon spe­ci­fi­ca­ti­on does not enti­re­ly meet the requi­re­ments set out in Sec­tion 2 or, as appli­ca­ble, com­ply with obli­ga­ti­ons set out in Sec­tions 2 and 3 of Chap­ter V, it shall inform the Com­mis­si­on the­reof with a detail­ed expl­ana­ti­on. The Com­mis­si­on shall assess that infor­ma­ti­on and, if appro­pria­te, amend the imple­men­ting act estab­li­shing the com­mon spe­ci­fi­ca­ti­on concerned.

Artic­le 42 Pre­sump­ti­on of con­for­mi­ty with cer­tain requirements

1. High-risk AI systems that have been trai­ned and tested on data reflec­ting the spe­ci­fic geo­gra­phi­cal, beha­viou­ral, con­tex­tu­al or func­tion­al set­ting within which they are inten­ded to be used shall be pre­su­med to com­ply with the rele­vant requi­re­ments laid down in Artic­le 10(4).

2. High-risk AI systems that have been cer­ti­fi­ed or for which a state­ment of con­for­mi­ty has been issued under a cyber­se­cu­ri­ty sche­me pur­su­ant to Regu­la­ti­on (EU) 2019/881 and the refe­ren­ces of which have been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on shall be pre­su­med to com­ply with the cyber­se­cu­ri­ty requi­re­ments set out in Artic­le 15 of this Regu­la­ti­on in so far as the cyber­se­cu­ri­ty cer­ti­fi­ca­te or state­ment of con­for­mi­ty or parts the­reof cover tho­se requirements.

(122) It is appro­pria­te that, wit­hout pre­ju­di­ce to the use of har­mo­ni­s­ed stan­dards and com­mon spe­ci­fi­ca­ti­ons, pro­vi­ders of a high-risk AI system that has been trai­ned and tested on data reflec­ting the spe­ci­fic geo­gra­phi­cal, beha­viou­ral, con­tex­tu­al or func­tion­al set­ting within which the AI system is inten­ded to be used, should be pre­su­med to com­ply with the rele­vant mea­su­re pro­vi­ded for under the requi­re­ment on data gover­nan­ce set out in this Regu­la­ti­on. Wit­hout pre­ju­di­ce to the requi­re­ments rela­ted to robust­ness and accu­ra­cy set out in this Regu­la­ti­on, in accordance with Artic­le 54(3) of Regu­la­ti­on (EU) 2019/881, high-risk AI systems that have been cer­ti­fi­ed or for which a state­ment of con­for­mi­ty has been issued under a cyber­se­cu­ri­ty sche­me pur­su­ant to that Regu­la­ti­on and the refe­ren­ces of which have been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on should be pre­su­med to com­ply with the cyber­se­cu­ri­ty requi­re­ment of this Regu­la­ti­on in so far as the cyber­se­cu­ri­ty cer­ti­fi­ca­te or state­ment of con­for­mi­ty or parts the­reof cover the cyber­se­cu­ri­ty requi­re­ment of this Regu­la­ti­on. This remains wit­hout pre­ju­di­ce to the vol­un­t­a­ry natu­re of that cyber­se­cu­ri­ty scheme.

Artic­le 43 Con­for­mi­ty assessment

(123) In order to ensu­re a high level of trust­wort­hi­ness of high-risk AI systems, tho­se systems should be sub­ject to a con­for­mi­ty assess­ment pri­or to their pla­cing on the mar­ket or put­ting into service.

1. For high-risk AI systems listed in point 1 of Annex III, whe­re, in demon­st­ra­ting the com­pli­ance of a high-risk AI system with the requi­re­ments set out in Sec­tion 2, the pro­vi­der has applied har­mo­ni­s­ed stan­dards refer­red to in Artic­le 40, or, whe­re appli­ca­ble, com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­le 41, the pro­vi­der shall opt for one of the fol­lo­wing con­for­mi­ty assess­ment pro­ce­du­res based on:

(a) the inter­nal con­trol refer­red to in Annex VI; or

(b) the assess­ment of the qua­li­ty manage­ment system and the assess­ment of the tech­ni­cal docu­men­ta­ti­on, with the invol­vement of a noti­fi­ed body, refer­red to in Annex VII.

In demon­st­ra­ting the com­pli­ance of a high-risk AI system with the requi­re­ments set out in Sec­tion 2, the pro­vi­der shall fol­low the con­for­mi­ty assess­ment pro­ce­du­re set out in Annex VII where:

(a) har­mo­ni­s­ed stan­dards refer­red to in Artic­le 40 do not exist, and com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­le 41 are not available;

(b) the pro­vi­der has not applied, or has applied only part of, the har­mo­ni­s­ed standard;

(c) the com­mon spe­ci­fi­ca­ti­ons refer­red to in point (a) exist, but the pro­vi­der has not applied them;

(d) one or more of the har­mo­ni­s­ed stan­dards refer­red to in point (a) has been published with a rest­ric­tion, and only on the part of the stan­dard that was restricted.

For the pur­po­ses of the con­for­mi­ty assess­ment pro­ce­du­re refer­red to in Annex VII, the pro­vi­der may choo­se any of the noti­fi­ed bodies. Howe­ver, whe­re the high-risk AI system is inten­ded to be put into ser­vice by law enforce­ment, immi­gra­ti­on or asyl­um aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es, the mar­ket sur­veil­lan­ce aut­ho­ri­ty refer­red to in Artic­le 74(8) or (9), as appli­ca­ble, shall act as a noti­fi­ed body.

2. For high-risk AI systems refer­red to in points 2 to 8 of Annex III, pro­vi­ders shall fol­low the con­for­mi­ty assess­ment pro­ce­du­re based on inter­nal con­trol as refer­red to in Annex VI, which does not pro­vi­de for the invol­vement of a noti­fi­ed body. 

3. For high-risk AI systems cover­ed by the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Sec­tion A of Annex I, the pro­vi­der shall fol­low the rele­vant con­for­mi­ty assess­ment pro­ce­du­re as requi­red under tho­se legal acts. The requi­re­ments set out in Sec­tion 2 of this Chap­ter shall app­ly to tho­se high-risk AI systems and shall be part of that assess­ment. Points 4.3., 4.4., 4.5. and the fifth para­graph of point 4.6 of Annex VII shall also apply.

For the pur­po­ses of that assess­ment, noti­fi­ed bodies which have been noti­fi­ed under tho­se legal acts shall be entit­led to con­trol the con­for­mi­ty of the high-risk AI systems with the requi­re­ments set out in Sec­tion 2, pro­vi­ded that the com­pli­ance of tho­se noti­fi­ed bodies with requi­re­ments laid down in Artic­le 31(4), (5), (10) and (11) has been asses­sed in the con­text of the noti­fi­ca­ti­on pro­ce­du­re under tho­se legal acts.

Whe­re a legal act listed in Sec­tion A of Annex I enables the pro­duct manu­fac­tu­rer to opt out from a third-par­ty con­for­mi­ty assess­ment, pro­vi­ded that that manu­fac­tu­rer has applied all har­mo­ni­s­ed stan­dards cove­ring all the rele­vant requi­re­ments, that manu­fac­tu­rer may use that opti­on only if it has also applied har­mo­ni­s­ed stan­dards or, whe­re appli­ca­ble, com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­le 41, cove­ring all requi­re­ments set out in Sec­tion 2 of this Chapter.

(124) It is appro­pria­te that, in order to mini­mi­se the bur­den on ope­ra­tors and avo­id any pos­si­ble dupli­ca­ti­on, for high-risk AI systems rela­ted to pro­ducts which are cover­ed by exi­sting Uni­on har­mo­ni­sa­ti­on legis­la­ti­on based on the New Legis­la­ti­ve Frame­work, the com­pli­ance of tho­se AI systems with the requi­re­ments of this Regu­la­ti­on should be asses­sed as part of the con­for­mi­ty assess­ment alre­a­dy pro­vi­ded for in that law. The appli­ca­bi­li­ty of the requi­re­ments of this Regu­la­ti­on should thus not affect the spe­ci­fic logic, metho­do­lo­gy or gene­ral struc­tu­re of con­for­mi­ty assess­ment under the rele­vant Uni­on har­mo­ni­sa­ti­on legislation. 

(125) Given the com­ple­xi­ty of high-risk AI systems and the risks that are asso­cia­ted with them, it is important to deve­lop an ade­qua­te con­for­mi­ty assess­ment pro­ce­du­re for high-risk AI systems invol­ving noti­fi­ed bodies, so-cal­led third par­ty con­for­mi­ty assess­ment. Howe­ver, given the cur­rent expe­ri­ence of pro­fes­sio­nal pre-mar­ket cer­ti­fiers in the field of pro­duct safe­ty and the dif­fe­rent natu­re of risks invol­ved, it is appro­pria­te to limit, at least in an initi­al pha­se of appli­ca­ti­on of this Regu­la­ti­on, the scope of appli­ca­ti­on of third-par­ty con­for­mi­ty assess­ment for high-risk AI systems other than tho­se rela­ted to pro­ducts. The­r­e­fo­re, the con­for­mi­ty assess­ment of such systems should be car­ri­ed out as a gene­ral rule by the pro­vi­der under its own respon­si­bi­li­ty, with the only excep­ti­on of AI systems inten­ded to be used for biometrics.

(126) In order to car­ry out third-par­ty con­for­mi­ty assess­ments when so requi­red, noti­fi­ed bodies should be noti­fi­ed under this Regu­la­ti­on by the natio­nal com­pe­tent aut­ho­ri­ties, pro­vi­ded that they com­ply with a set of requi­re­ments, in par­ti­cu­lar on inde­pen­dence, com­pe­tence, absence of con­flicts of inte­rests and sui­ta­ble cyber­se­cu­ri­ty requi­re­ments. Noti­fi­ca­ti­on of tho­se bodies should be sent by natio­nal com­pe­tent aut­ho­ri­ties to the Com­mis­si­on and the other Mem­ber Sta­tes by means of the elec­tro­nic noti­fi­ca­ti­on tool deve­lo­ped and mana­ged by the Com­mis­si­on pur­su­ant to Artic­le R23 of Annex I to Decis­i­on No 768/2008/EC.

(127) In line with Uni­on com­mit­ments under the World Trade Orga­nizati­on Agree­ment on Tech­ni­cal Bar­riers to Trade, it is ade­qua­te to faci­li­ta­te the mutu­al reco­gni­ti­on of con­for­mi­ty assess­ment results pro­du­ced by com­pe­tent con­for­mi­ty assess­ment bodies, inde­pen­dent of the ter­ri­to­ry in which they are estab­lished, pro­vi­ded that tho­se con­for­mi­ty assess­ment bodies estab­lished under the law of a third coun­try meet the appli­ca­ble requi­re­ments of this Regu­la­ti­on and the Uni­on has con­clu­ded an agree­ment to that ext­ent. In this con­text, the Com­mis­si­on should actively explo­re pos­si­ble inter­na­tio­nal instru­ments for that pur­po­se and in par­ti­cu­lar pur­sue the con­clu­si­on of mutu­al reco­gni­ti­on agree­ments with third countries.

4. High-risk AI systems that have alre­a­dy been sub­ject to a con­for­mi­ty assess­ment pro­ce­du­re shall under­go a new con­for­mi­ty assess­ment pro­ce­du­re in the event of a sub­stan­ti­al modi­fi­ca­ti­on, regard­less of whe­ther the modi­fi­ed system is inten­ded to be fur­ther dis­tri­bu­ted or con­ti­nues to be used by the cur­rent deployer.

For high-risk AI systems that con­ti­n­ue to learn after being pla­ced on the mar­ket or put into ser­vice, chan­ges to the high-risk AI system and its per­for­mance that have been pre-deter­mi­ned by the pro­vi­der at the moment of the initi­al con­for­mi­ty assess­ment and are part of the infor­ma­ti­on con­tai­ned in the tech­ni­cal docu­men­ta­ti­on refer­red to in point 2(f) of Annex IV, shall not con­sti­tu­te a sub­stan­ti­al modification.

(128) In line with the com­mon­ly estab­lished noti­on of sub­stan­ti­al modi­fi­ca­ti­on for pro­ducts regu­la­ted by Uni­on har­mo­ni­sa­ti­on legis­la­ti­on, it is appro­pria­te that when­ever a chan­ge occurs which may affect the com­pli­ance of a high-risk AI system with this Regu­la­ti­on (e.g. chan­ge of ope­ra­ting system or soft­ware archi­tec­tu­re), or when the inten­ded pur­po­se of the system chan­ges, that AI system should be con­side­red to be a new AI system which should under­go a new con­for­mi­ty assess­ment. Howe­ver, chan­ges occur­ring to the algo­rithm and the per­for­mance of AI systems which con­ti­n­ue to ‘learn’ after being pla­ced on the mar­ket or put into ser­vice, name­ly auto­ma­ti­cal­ly adap­ting how func­tions are car­ri­ed out, should not con­sti­tu­te a sub­stan­ti­al modi­fi­ca­ti­on, pro­vi­ded that tho­se chan­ges have been pre-deter­mi­ned by the pro­vi­der and asses­sed at the moment of the con­for­mi­ty assessment.

5. The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 97 in order to amend Anne­xes VI and VII by updating them in light of tech­ni­cal progress.

6. The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 97 in order to amend para­graphs 1 and 2 of this Artic­le in order to sub­ject high-risk AI systems refer­red to in points 2 to 8 of Annex III to the con­for­mi­ty assess­ment pro­ce­du­re refer­red to in Annex VII or parts the­reof. The Com­mis­si­on shall adopt such dele­ga­ted acts taking into account the effec­ti­ve­ness of the con­for­mi­ty assess­ment pro­ce­du­re based on inter­nal con­trol refer­red to in Annex VI in pre­ven­ting or mini­mi­sing the risks to health and safe­ty and pro­tec­tion of fun­da­men­tal rights posed by such systems, as well as the avai­la­bi­li­ty of ade­qua­te capa­ci­ties and resour­ces among noti­fi­ed bodies.

Artic­le 44 Certificates

1. Cer­ti­fi­ca­tes issued by noti­fi­ed bodies in accordance with Annex VII shall be drawn-up in a lan­guage which can be easi­ly under­s­tood by the rele­vant aut­ho­ri­ties in the Mem­ber Sta­te in which the noti­fi­ed body is established. 

2. Cer­ti­fi­ca­tes shall be valid for the peri­od they indi­ca­te, which shall not exce­ed five years for AI systems cover­ed by Annex I, and four years for AI systems cover­ed by Annex III. At the request of the pro­vi­der, the vali­di­ty of a cer­ti­fi­ca­te may be exten­ded for fur­ther peri­ods, each not exce­e­ding five years for AI systems cover­ed by Annex I, and four years for AI systems cover­ed by Annex III, based on a re-assess­ment in accordance with the appli­ca­ble con­for­mi­ty assess­ment pro­ce­du­res. Any sup­ple­ment to a cer­ti­fi­ca­te shall remain valid, pro­vi­ded that the cer­ti­fi­ca­te which it sup­ple­ments is valid.

3. Whe­re a noti­fi­ed body finds that an AI system no lon­ger meets the requi­re­ments set out in Sec­tion 2, it shall, taking account of the prin­ci­ple of pro­por­tio­na­li­ty, sus­pend or with­draw the cer­ti­fi­ca­te issued or impo­se rest­ric­tions on it, unless com­pli­ance with tho­se requi­re­ments is ensu­red by appro­pria­te cor­rec­ti­ve action taken by the pro­vi­der of the system within an appro­pria­te dead­line set by the noti­fi­ed body. The noti­fi­ed body shall give rea­sons for its decision.

An appeal pro­ce­du­re against decis­i­ons of the noti­fi­ed bodies, inclu­ding on con­for­mi­ty cer­ti­fi­ca­tes issued, shall be available. 

Artic­le 45 Infor­ma­ti­on obli­ga­ti­ons of noti­fi­ed bodies

1. Noti­fi­ed bodies shall inform the noti­fy­ing aut­ho­ri­ty of the following:

(a) any Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­tes, any sup­ple­ments to tho­se cer­ti­fi­ca­tes, and any qua­li­ty manage­ment system appr­ovals issued in accordance with the requi­re­ments of Annex VII;

(b) any refu­sal, rest­ric­tion, sus­pen­si­on or with­dra­wal of a Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te or a qua­li­ty manage­ment system appr­oval issued in accordance with the requi­re­ments of Annex VII;

(c) any cir­cum­stances affec­ting the scope of or con­di­ti­ons for notification;

(d) any request for infor­ma­ti­on which they have recei­ved from mar­ket sur­veil­lan­ce aut­ho­ri­ties regar­ding con­for­mi­ty assess­ment activities;

(e) on request, con­for­mi­ty assess­ment acti­vi­ties per­for­med within the scope of their noti­fi­ca­ti­on and any other acti­vi­ty per­for­med, inclu­ding cross-bor­der acti­vi­ties and subcontracting.

2. Each noti­fi­ed body shall inform the other noti­fi­ed bodies of:

(a) qua­li­ty manage­ment system appr­ovals which it has refu­sed, sus­pen­ded or with­drawn, and, upon request, of qua­li­ty system appr­ovals which it has issued;

(b) Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­tes or any sup­ple­ments the­re­to which it has refu­sed, with­drawn, sus­pen­ded or other­wi­se rest­ric­ted, and, upon request, of the cer­ti­fi­ca­tes and/or sup­ple­ments the­re­to which it has issued.

3. Each noti­fi­ed body shall pro­vi­de the other noti­fi­ed bodies car­ry­ing out simi­lar con­for­mi­ty assess­ment acti­vi­ties cove­ring the same types of AI systems with rele­vant infor­ma­ti­on on issues rela­ting to nega­ti­ve and, on request, posi­ti­ve con­for­mi­ty assess­ment results.

4. Noti­fi­ed bodies shall safe­guard the con­fi­den­tia­li­ty of the infor­ma­ti­on that they obtain, in accordance with Artic­le 78. 

Artic­le 46 Dero­ga­ti­on from con­for­mi­ty assess­ment procedure

1. By way of dero­ga­ti­on from Artic­le 43 and upon a duly justi­fi­ed request, any mar­ket sur­veil­lan­ce aut­ho­ri­ty may aut­ho­ri­se the pla­cing on the mar­ket or the put­ting into ser­vice of spe­ci­fic high-risk AI systems within the ter­ri­to­ry of the Mem­ber Sta­te con­cer­ned, for excep­tio­nal rea­sons of public secu­ri­ty or the pro­tec­tion of life and health of per­sons, envi­ron­men­tal pro­tec­tion or the pro­tec­tion of key indu­stri­al and infras­truc­tu­ral assets. That aut­ho­ri­sa­ti­on shall be for a limi­t­ed peri­od while the neces­sa­ry con­for­mi­ty assess­ment pro­ce­du­res are being car­ri­ed out, taking into account the excep­tio­nal rea­sons justi­fy­ing the dero­ga­ti­on. The com­ple­ti­on of tho­se pro­ce­du­res shall be under­ta­ken wit­hout undue delay.

2. In a duly justi­fi­ed situa­ti­on of urgen­cy for excep­tio­nal rea­sons of public secu­ri­ty or in the case of spe­ci­fic, sub­stan­ti­al and immi­nent thre­at to the life or phy­si­cal safe­ty of natu­ral per­sons, law-enforce­ment aut­ho­ri­ties or civil pro­tec­tion aut­ho­ri­ties may put a spe­ci­fic high-risk AI system into ser­vice wit­hout the aut­ho­ri­sa­ti­on refer­red to in para­graph 1, pro­vi­ded that such aut­ho­ri­sa­ti­on is reque­sted during or after the use wit­hout undue delay. If the aut­ho­ri­sa­ti­on refer­red to in para­graph 1 is refu­sed, the use of the high-risk AI system shall be stop­ped with imme­dia­te effect and all the results and out­puts of such use shall be imme­dia­te­ly discarded.

3. The aut­ho­ri­sa­ti­on refer­red to in para­graph 1 shall be issued only if the mar­ket sur­veil­lan­ce aut­ho­ri­ty con­clu­des that the high-risk AI system com­plies with the requi­re­ments of Sec­tion 2. The mar­ket sur­veil­lan­ce aut­ho­ri­ty shall inform the Com­mis­si­on and the other Mem­ber Sta­tes of any aut­ho­ri­sa­ti­on issued pur­su­ant to para­graphs 1 and 2. This obli­ga­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data in rela­ti­on to the acti­vi­ties of law-enforce­ment authorities.

4. Whe­re, within 15 calen­dar days of rece­ipt of the infor­ma­ti­on refer­red to in para­graph 3, no objec­tion has been rai­sed by eit­her a Mem­ber Sta­te or the Com­mis­si­on in respect of an aut­ho­ri­sa­ti­on issued by a mar­ket sur­veil­lan­ce aut­ho­ri­ty of a Mem­ber Sta­te in accordance with para­graph 1, that aut­ho­ri­sa­ti­on shall be dee­med justified.

5. Whe­re, within 15 calen­dar days of rece­ipt of the noti­fi­ca­ti­on refer­red to in para­graph 3, objec­tions are rai­sed by a Mem­ber Sta­te against an aut­ho­ri­sa­ti­on issued by a mar­ket sur­veil­lan­ce aut­ho­ri­ty of ano­ther Mem­ber Sta­te, or whe­re the Com­mis­si­on con­siders the aut­ho­ri­sa­ti­on to be con­tra­ry to Uni­on law, or the con­clu­si­on of the Mem­ber Sta­tes regar­ding the com­pli­ance of the system as refer­red to in para­graph 3 to be unfoun­ded, the Com­mis­si­on shall, wit­hout delay, enter into con­sul­ta­ti­ons with the rele­vant Mem­ber Sta­te. The ope­ra­tors con­cer­ned shall be con­sul­ted and have the pos­si­bi­li­ty to pre­sent their views. Having regard the­re­to, the Com­mis­si­on shall deci­de whe­ther the aut­ho­ri­sa­ti­on is justi­fi­ed. The Com­mis­si­on shall address its decis­i­on to the Mem­ber Sta­te con­cer­ned and to the rele­vant operators.

6. Whe­re the Com­mis­si­on con­siders the aut­ho­ri­sa­ti­on unju­sti­fi­ed, it shall be with­drawn by the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te concerned.

7. For high-risk AI systems rela­ted to pro­ducts cover­ed by Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Sec­tion A of Annex I, only the dero­ga­ti­ons from the con­for­mi­ty assess­ment estab­lished in that Uni­on har­mo­ni­sa­ti­on legis­la­ti­on shall apply.

(130) Under cer­tain con­di­ti­ons, rapid avai­la­bi­li­ty of inno­va­ti­ve tech­no­lo­gies may be cru­cial for health and safe­ty of per­sons, the pro­tec­tion of the envi­ron­ment and cli­ma­te chan­ge and for socie­ty as a who­le. It is thus appro­pria­te that under excep­tio­nal rea­sons of public secu­ri­ty or pro­tec­tion of life and health of natu­ral per­sons, envi­ron­men­tal pro­tec­tion and the pro­tec­tion of key indu­stri­al and infras­truc­tu­ral assets, mar­ket sur­veil­lan­ce aut­ho­ri­ties could aut­ho­ri­se the pla­cing on the mar­ket or the put­ting into ser­vice of AI systems which have not under­go­ne a con­for­mi­ty assess­ment. In duly justi­fi­ed situa­tions, as pro­vi­ded for in this Regu­la­ti­on, law enforce­ment aut­ho­ri­ties or civil pro­tec­tion aut­ho­ri­ties may put a spe­ci­fic high-risk AI system into ser­vice wit­hout the aut­ho­ri­sa­ti­on of the mar­ket sur­veil­lan­ce aut­ho­ri­ty, pro­vi­ded that such aut­ho­ri­sa­ti­on is reque­sted during or after the use wit­hout undue delay.

Artic­le 47 EU decla­ra­ti­on of conformity

1. The pro­vi­der shall draw up a writ­ten machi­ne rea­da­ble, phy­si­cal or elec­tro­ni­cal­ly signed EU decla­ra­ti­on of con­for­mi­ty for each high-risk AI system, and keep it at the dis­po­sal of the natio­nal com­pe­tent aut­ho­ri­ties for 10 years after the high-risk AI system has been pla­ced on the mar­ket or put into ser­vice. The EU decla­ra­ti­on of con­for­mi­ty shall iden­ti­fy the high-risk AI system for which it has been drawn up. A copy of the EU decla­ra­ti­on of con­for­mi­ty shall be sub­mit­ted to the rele­vant natio­nal com­pe­tent aut­ho­ri­ties upon request.

2. The EU decla­ra­ti­on of con­for­mi­ty shall sta­te that the high-risk AI system con­cer­ned meets the requi­re­ments set out in Sec­tion 2. The EU decla­ra­ti­on of con­for­mi­ty shall con­tain the infor­ma­ti­on set out in Annex V, and shall be trans­la­ted into a lan­guage that can be easi­ly under­s­tood by the natio­nal com­pe­tent aut­ho­ri­ties of the Mem­ber Sta­tes in which the high-risk AI system is pla­ced on the mar­ket or made available.

3. Whe­re high-risk AI systems are sub­ject to other Uni­on har­mo­ni­sa­ti­on legis­la­ti­on which also requi­res an EU decla­ra­ti­on of con­for­mi­ty, a sin­gle EU decla­ra­ti­on of con­for­mi­ty shall be drawn up in respect of all Uni­on law appli­ca­ble to the high-risk AI system. The decla­ra­ti­on shall con­tain all the infor­ma­ti­on requi­red to iden­ti­fy the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on to which the decla­ra­ti­on relates.

4. By dra­wing up the EU decla­ra­ti­on of con­for­mi­ty, the pro­vi­der shall assu­me respon­si­bi­li­ty for com­pli­ance with the requi­re­ments set out in Sec­tion 2. The pro­vi­der shall keep the EU decla­ra­ti­on of con­for­mi­ty up-to-date as appropriate.

5. The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 97 in order to amend Annex V by updating the con­tent of the EU decla­ra­ti­on of con­for­mi­ty set out in that Annex, in order to intro­du­ce ele­ments that beco­me neces­sa­ry in light of tech­ni­cal progress.

Artic­le 48 CE marking

1. The CE mar­king shall be sub­ject to the gene­ral prin­ci­ples set out in Artic­le 30 of Regu­la­ti­on (EC) No 765/2008.

2. For high-risk AI systems pro­vi­ded digi­tal­ly, a digi­tal CE mar­king shall be used, only if it can easi­ly be acce­s­sed via the inter­face from which that system is acce­s­sed or via an easi­ly acce­s­si­ble machi­ne-rea­da­ble code or other elec­tro­nic means.

3. The CE mar­king shall be affi­xed visi­bly, legi­bly and inde­li­bly for high-risk AI systems. Whe­re that is not pos­si­ble or not war­ran­ted on account of the natu­re of the high-risk AI system, it shall be affi­xed to the pack­a­ging or to the accom­pany­ing docu­men­ta­ti­on, as appropriate.

4. Whe­re appli­ca­ble, the CE mar­king shall be fol­lo­wed by the iden­ti­fi­ca­ti­on num­ber of the noti­fi­ed body respon­si­ble for the con­for­mi­ty assess­ment pro­ce­du­res set out in Artic­le 43. The iden­ti­fi­ca­ti­on num­ber of the noti­fi­ed body shall be affi­xed by the body its­elf or, under its ins­truc­tions, by the pro­vi­der or by the provider’s aut­ho­ri­sed repre­sen­ta­ti­ve. The iden­ti­fi­ca­ti­on num­ber shall also be indi­ca­ted in any pro­mo­tio­nal mate­ri­al which men­ti­ons that the high-risk AI system ful­fils the requi­re­ments for CE marking.

5. Whe­re high-risk AI systems are sub­ject to other Uni­on law which also pro­vi­des for the affixing of the CE mar­king, the CE mar­king shall indi­ca­te that the high-risk AI system also ful­fils the requi­re­ments of that other law.

(129) High-risk AI systems should bear the CE mar­king to indi­ca­te their con­for­mi­ty with this Regu­la­ti­on so that they can move free­ly within the inter­nal mar­ket. For high-risk AI systems embedded in a pro­duct, a phy­si­cal CE mar­king should be affi­xed, and may be com­ple­men­ted by a digi­tal CE mar­king. For high-risk AI systems only pro­vi­ded digi­tal­ly, a digi­tal CE mar­king should be used. Mem­ber Sta­tes should not crea­te unju­sti­fi­ed obs­ta­cles to the pla­cing on the mar­ket or the put­ting into ser­vice of high-risk AI systems that com­ply with the requi­re­ments laid down in this Regu­la­ti­on and bear the CE marking.

Artic­le 49 Registration

1. Befo­re pla­cing on the mar­ket or put­ting into ser­vice a high-risk AI system listed in Annex III, with the excep­ti­on of high-risk AI systems refer­red to in point 2 of Annex III, the pro­vi­der or, whe­re appli­ca­ble, the aut­ho­ri­sed repre­sen­ta­ti­ve shall regi­ster them­sel­ves and their system in the EU data­ba­se refer­red to in Artic­le 71.

2. Befo­re pla­cing on the mar­ket or put­ting into ser­vice an AI system for which the pro­vi­der has con­clu­ded that it is not high-risk accor­ding to Artic­le 6(3), that pro­vi­der or, whe­re appli­ca­ble, the aut­ho­ri­sed repre­sen­ta­ti­ve shall regi­ster them­sel­ves and that system in the EU data­ba­se refer­red to in Artic­le 71.

3. Befo­re put­ting into ser­vice or using a high-risk AI system listed in Annex III, with the excep­ti­on of high-risk AI systems listed in point 2 of Annex III, deployers that are public aut­ho­ri­ties, Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es or per­sons acting on their behalf shall regi­ster them­sel­ves, sel­ect the system and regi­ster its use in the EU data­ba­se refer­red to in Artic­le 71. 

4. For high-risk AI systems refer­red to in points 1, 6 and 7 of Annex III, in the are­as of law enforce­ment, migra­ti­on, asyl­um and bor­der con­trol manage­ment, the regi­stra­ti­on refer­red to in para­graphs 1, 2 and 3 of this Artic­le shall be in a secu­re non-public sec­tion of the EU data­ba­se refer­red to in Artic­le 71 and shall include only the fol­lo­wing infor­ma­ti­on, as appli­ca­ble, refer­red to in:

(a) Sec­tion A, points 1 to 10, of Annex VIII, with the excep­ti­on of points 6, 8 and 9;

(b) Sec­tion B, points 1 to 5, and points 8 and 9 of Annex VIII;

(c) Sec­tion C, points 1 to 3, of Annex VIII;

(d) points 1, 2, 3 and 5, of Annex IX.

Only the Com­mis­si­on and natio­nal aut­ho­ri­ties refer­red to in Artic­le 74(8) shall have access to the respec­ti­ve rest­ric­ted sec­tions of the EU data­ba­se listed in the first sub­pa­ra­graph of this paragraph.

5. High-risk AI systems refer­red to in point 2 of Annex III shall be regi­stered at natio­nal level. 

(131) In order to faci­li­ta­te the work of the Com­mis­si­on and the Mem­ber Sta­tes in the AI field as well as to increa­se the trans­pa­ren­cy towards the public, pro­vi­ders of high-risk AI systems other than tho­se rela­ted to pro­ducts fal­ling within the scope of rele­vant exi­sting Uni­on har­mo­ni­sa­ti­on legis­la­ti­on, as well as pro­vi­ders who con­sider that an AI system listed in the high-risk use cases in an annex to this Regu­la­ti­on is not high-risk on the basis of a dero­ga­ti­on, should be requi­red to regi­ster them­sel­ves and infor­ma­ti­on about their AI system in an EU data­ba­se, to be estab­lished and mana­ged by the Com­mis­si­on. Befo­re using an AI system listed in the high-risk use cases in an annex to this Regu­la­ti­on, deployers of high-risk AI systems that are public aut­ho­ri­ties, agen­ci­es or bodies, should regi­ster them­sel­ves in such data­ba­se and sel­ect the system that they envi­sa­ge to use.

Other deployers should be entit­led to do so vol­un­t­a­ri­ly. This sec­tion of the EU data­ba­se should be publicly acce­s­si­ble, free of char­ge, the infor­ma­ti­on should be easi­ly navigab­le, under­stan­da­ble and machi­ne-rea­da­ble. The EU data­ba­se should also be user-fri­end­ly, for exam­p­le by pro­vi­ding search func­tion­a­li­ties, inclu­ding through key­words, allo­wing the gene­ral public to find rele­vant infor­ma­ti­on to be sub­mit­ted upon the regi­stra­ti­on of high-risk AI systems and on the use case of high-risk AI systems, set out in an annex to this Regu­la­ti­on, to which the high-risk AI systems cor­re­spond. Any sub­stan­ti­al modi­fi­ca­ti­on of high-risk AI systems should also be regi­stered in the EU data­ba­se. For high-risk AI systems in the area of law enforce­ment, migra­ti­on, asyl­um and bor­der con­trol manage­ment, the regi­stra­ti­on obli­ga­ti­ons should be ful­fil­led in a secu­re non-public sec­tion of the EU data­ba­se. Access to the secu­re non-public sec­tion should be strict­ly limi­t­ed to the Com­mis­si­on as well as to mar­ket sur­veil­lan­ce aut­ho­ri­ties with regard to their natio­nal sec­tion of that data­ba­se. High-risk AI systems in the area of cri­ti­cal infras­truc­tu­re should only be regi­stered at natio­nal level. The Com­mis­si­on should be the con­trol­ler of the EU data­ba­se, in accordance with Regu­la­ti­on (EU) 2018/1725. In order to ensu­re the full func­tion­a­li­ty of the EU data­ba­se, when deployed, the pro­ce­du­re for set­ting up the data­ba­se should include the deve­lo­p­ment of func­tion­al spe­ci­fi­ca­ti­ons by the Com­mis­si­on and an inde­pen­dent audit report. The Com­mis­si­on should take into account cyber­se­cu­ri­ty risks when car­ry­ing out its tasks as data con­trol­ler on the EU data­ba­se.
In order to maxi­mi­se the avai­la­bi­li­ty and use of the EU data­ba­se by the public, the EU data­ba­se, inclu­ding the infor­ma­ti­on made available through it, should com­ply with requi­re­ments under the Direc­ti­ve (EU) 2019/882.

Chap­ter IV Trans­pa­ren­cy obli­ga­ti­ons for pro­vi­ders and deployers of cer­tain AI systems

Artic­le 50 Trans­pa­ren­cy obli­ga­ti­ons for pro­vi­ders and deployers of cer­tain AI systems

1. Pro­vi­ders shall ensu­re that AI systems inten­ded to inter­act direct­ly with natu­ral per­sons are desi­gned and deve­lo­ped in such a way that the natu­ral per­sons con­cer­ned are infor­med that they are inter­ac­ting with an AI system, unless this is obvious from the point of view of a natu­ral per­son who is rea­son­ab­ly well-infor­med, obser­vant and cir­cum­spect, taking into account the cir­cum­stances and the con­text of use. This obli­ga­ti­on shall not app­ly to AI systems aut­ho­ri­sed by law to detect, pre­vent, inve­sti­ga­te or pro­se­cu­te cri­mi­nal offen­ces, sub­ject to appro­pria­te safe­guards for the rights and free­doms of third par­ties, unless tho­se systems are available for the public to report a cri­mi­nal offence. 

(132) Cer­tain AI systems inten­ded to inter­act with natu­ral per­sons or to gene­ra­te con­tent may pose spe­ci­fic risks of imper­so­na­ti­on or decep­ti­on irre­spec­ti­ve of whe­ther they qua­li­fy as high-risk or not. In cer­tain cir­cum­stances, the use of the­se systems should the­r­e­fo­re be sub­ject to spe­ci­fic trans­pa­ren­cy obli­ga­ti­ons wit­hout pre­ju­di­ce to the requi­re­ments and obli­ga­ti­ons for high-risk AI systems and sub­ject to tar­ge­ted excep­ti­ons to take into account the spe­cial need of law enforce­ment. In par­ti­cu­lar, natu­ral per­sons should be noti­fi­ed that they are inter­ac­ting with an AI system, unless this is obvious from the point of view of a natu­ral per­son who is rea­son­ab­ly well-infor­med, obser­vant and cir­cum­spect taking into account the cir­cum­stances and the con­text of use. When imple­men­ting that obli­ga­ti­on, the cha­rac­te­ri­stics of natu­ral per­sons belon­ging to vul­nerable groups due to their age or disa­bi­li­ty should be taken into account to the ext­ent the AI system is inten­ded to inter­act with tho­se groups as well. Moreo­ver, natu­ral per­sons should be noti­fi­ed when they are expo­sed to AI systems that, by pro­ce­s­sing their bio­me­tric data, can iden­ti­fy or infer the emo­ti­ons or inten­ti­ons of tho­se per­sons or assign them to spe­ci­fic cate­go­ries. Such spe­ci­fic cate­go­ries can rela­te to aspects such as sex, age, hair colour, eye colour, tat­toos, per­so­nal traits, eth­nic ori­gin, per­so­nal pre­fe­ren­ces and inte­rests. Such infor­ma­ti­on and noti­fi­ca­ti­ons should be pro­vi­ded in acce­s­si­ble for­mats for per­sons with disabilities.

2. Pro­vi­ders of AI systems, inclu­ding gene­ral-pur­po­se AI systems, gene­ra­ting syn­the­tic audio, image, video or text con­tent, shall ensu­re that the out­puts of the AI system are mark­ed in a machi­ne-rea­da­ble for­mat and detec­ta­ble as arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted. Pro­vi­ders shall ensu­re their tech­ni­cal solu­ti­ons are effec­ti­ve, inter­ope­ra­ble, robust and relia­ble as far as this is tech­ni­cal­ly fea­si­ble, taking into account the spe­ci­fi­ci­ties and limi­ta­ti­ons of various types of con­tent, the costs of imple­men­ta­ti­on and the gene­ral­ly ack­now­led­ged sta­te of the art, as may be reflec­ted in rele­vant tech­ni­cal stan­dards. This obli­ga­ti­on shall not app­ly to the ext­ent the AI systems per­form an assi­sti­ve func­tion for stan­dard editing or do not sub­stan­ti­al­ly alter the input data pro­vi­ded by the deployer or the seman­tics the­reof, or whe­re aut­ho­ri­sed by law to detect, pre­vent, inve­sti­ga­te or pro­se­cu­te cri­mi­nal offences.

3. Deployers of an emo­ti­on reco­gni­ti­on system or a bio­me­tric cate­go­ri­sa­ti­on system shall inform the natu­ral per­sons expo­sed the­re­to of the ope­ra­ti­on of the system, and shall pro­cess the per­so­nal data in accordance with Regu­la­ti­ons (EU) 2016/679 and (EU) 2018/1725 and Direc­ti­ve (EU) 2016/680, as appli­ca­ble. This obli­ga­ti­on shall not app­ly to AI systems used for bio­me­tric cate­go­ri­sa­ti­on and emo­ti­on reco­gni­ti­on, which are per­mit­ted by law to detect, pre­vent or inve­sti­ga­te cri­mi­nal offen­ces, sub­ject to appro­pria­te safe­guards for the rights and free­doms of third par­ties, and in accordance with Uni­on law.

(94) Any pro­ce­s­sing of bio­me­tric data invol­ved in the use of AI systems for bio­me­tric iden­ti­fi­ca­ti­on for the pur­po­se of law enforce­ment needs to com­ply with Artic­le 10 of Direc­ti­ve (EU) 2016/680, that allo­ws such pro­ce­s­sing only whe­re strict­ly neces­sa­ry, sub­ject to appro­pria­te safe­guards for the rights and free­doms of the data sub­ject, and whe­re aut­ho­ri­sed by Uni­on or Mem­ber Sta­te law. Such use, when aut­ho­ri­sed, also needs to respect the prin­ci­ples laid down in Artic­le 4 (1) of Direc­ti­ve (EU) 2016/680 inclu­ding lawful­ness, fair­ness and trans­pa­ren­cy, pur­po­se limi­ta­ti­on, accu­ra­cy and sto­rage limitation.

4. Deployers of an AI system that gene­ra­tes or mani­pu­la­tes image, audio or video con­tent con­sti­tu­ting a deep fake, shall dis­c­lo­se that the con­tent has been arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted. This obli­ga­ti­on shall not app­ly whe­re the use is aut­ho­ri­sed by law to detect, pre­vent, inve­sti­ga­te or pro­se­cu­te cri­mi­nal offen­ces. Whe­re the con­tent forms part of an evi­dent­ly artis­tic, crea­ti­ve, sati­ri­cal, fic­tion­al or ana­log­ous work or pro­gram­me, the trans­pa­ren­cy obli­ga­ti­ons set out in this para­graph are limi­t­ed to dis­clo­sure of the exi­stence of such gene­ra­ted or mani­pu­la­ted con­tent in an appro­pria­te man­ner that does not ham­per the dis­play or enjoy­ment of the work.

Deployers of an AI system that gene­ra­tes or mani­pu­la­tes text which is published with the pur­po­se of informing the public on mat­ters of public inte­rest shall dis­c­lo­se that the text has been arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted. This obli­ga­ti­on shall not app­ly whe­re the use is aut­ho­ri­sed by law to detect, pre­vent, inve­sti­ga­te or pro­se­cu­te cri­mi­nal offen­ces or whe­re the AI-gene­ra­ted con­tent has under­go­ne a pro­cess of human review or edi­to­ri­al con­trol and whe­re a natu­ral or legal per­son holds edi­to­ri­al respon­si­bi­li­ty for the publi­ca­ti­on of the content. 

(133) A varie­ty of AI systems can gene­ra­te lar­ge quan­ti­ties of syn­the­tic con­tent that beco­mes incre­a­sing­ly hard for humans to distin­gu­ish from human-gene­ra­ted and authen­tic con­tent. The wide avai­la­bi­li­ty and incre­a­sing capa­bi­li­ties of tho­se systems have a signi­fi­cant impact on the inte­gri­ty and trust in the infor­ma­ti­on eco­sy­stem, rai­sing new risks of mis­in­for­ma­ti­on and mani­pu­la­ti­on at sca­le, fraud, imper­so­na­ti­on and con­su­mer decep­ti­on. In light of tho­se impacts, the fast tech­no­lo­gi­cal pace and the need for new methods and tech­ni­ques to trace ori­gin of infor­ma­ti­on, it is appro­pria­te to requi­re pro­vi­ders of tho­se systems to embed tech­ni­cal solu­ti­ons that enable mar­king in a machi­ne rea­da­ble for­mat and detec­tion that the out­put has been gene­ra­ted or mani­pu­la­ted by an AI system and not a human. Such tech­ni­ques and methods should be suf­fi­ci­ent­ly relia­ble, inter­ope­ra­ble, effec­ti­ve and robust as far as this is tech­ni­cal­ly fea­si­ble, taking into account available tech­ni­ques or a com­bi­na­ti­on of such tech­ni­ques, such as water­marks, meta­da­ta iden­ti­fi­ca­ti­ons, cryp­to­gra­phic methods for pro­ving pro­ven­an­ce and authen­ti­ci­ty of con­tent, log­ging methods, fin­ger­prints or other tech­ni­ques, as may be appro­pria­te. When imple­men­ting this obli­ga­ti­on, pro­vi­ders should also take into account the spe­ci­fi­ci­ties and the limi­ta­ti­ons of the dif­fe­rent types of con­tent and the rele­vant tech­no­lo­gi­cal and mar­ket deve­lo­p­ments in the field, as reflec­ted in the gene­ral­ly ack­now­led­ged sta­te of the art. Such tech­ni­ques and methods can be imple­men­ted at the level of the AI system or at the level of the AI model, inclu­ding gene­ral-pur­po­se AI models gene­ra­ting con­tent, ther­eby faci­li­ta­ting ful­film­ent of this obli­ga­ti­on by the down­stream pro­vi­der of the AI system. 
To remain pro­por­tio­na­te, it is appro­pria­te to envi­sa­ge that this mar­king obli­ga­ti­on should not cover AI systems per­forming pri­ma­ri­ly an assi­sti­ve func­tion for stan­dard editing or AI systems not sub­stan­ti­al­ly alte­ring the input data pro­vi­ded by the deployer or the seman­tics thereof.
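Recital 133 names metadata identifications as one admissible marking technique. A minimal illustrative sketch of a metadata-based mark, assuming a simple JSON "sidecar" record bound to the content by a hash; this format, and the field names `generator` and `ai_generated`, are this sketch's own assumptions, not a format prescribed by the Regulation (real deployments would use recognised provenance standards or watermarking):

```python
import hashlib
import json

def mark_output(content: bytes, model_name: str) -> str:
    """Return a JSON record declaring the content as AI-generated.

    The sha256 digest binds the declaration to the exact bytes it
    describes, so the mark is machine-readable and verifiable.
    """
    return json.dumps({
        "ai_generated": True,                            # the core disclosure
        "generator": model_name,                         # hypothetical field
        "sha256": hashlib.sha256(content).hexdigest(),   # content binding
    })

record = json.loads(mark_output(b"synthetic image bytes", "example-model"))
print(record["ai_generated"])  # True
```

A verifier would recompute the hash of the received content and compare it with the record; a mismatch means the mark does not belong to that content. Embedded watermarks are more robust than sidecar metadata, which is easily stripped, which is one reason the provision asks for solutions that are effective and robust only "as far as this is technically feasible".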

(134) Fur­ther to the tech­ni­cal solu­ti­ons employed by the pro­vi­ders of the AI system, deployers who use an AI system to gene­ra­te or mani­pu­la­te image, audio or video con­tent that app­re­cia­bly resem­bles exi­sting per­sons, objects, places, enti­ties or events and would fal­se­ly appear to a per­son to be authen­tic or truthful (deep fakes), should also cle­ar­ly and distin­gu­is­ha­b­ly dis­c­lo­se that the con­tent has been arti­fi­ci­al­ly crea­ted or mani­pu­la­ted by label­ling the AI out­put accor­din­gly and dis­clo­sing its arti­fi­ci­al ori­gin. Com­pli­ance with this trans­pa­ren­cy obli­ga­ti­on should not be inter­pre­ted as indi­ca­ting that the use of the AI system or its out­put impe­des the right to free­dom of expres­si­on and the right to free­dom of the arts and sci­en­ces gua­ran­teed in the Char­ter, in par­ti­cu­lar whe­re the con­tent is part of an evi­dent­ly crea­ti­ve, sati­ri­cal, artis­tic, fic­tion­al or ana­log­ous work or pro­gram­me, sub­ject to appro­pria­te safe­guards for the rights and free­doms of third par­ties. In tho­se cases, the trans­pa­ren­cy obli­ga­ti­on for deep fakes set out in this Regu­la­ti­on is limi­t­ed to dis­clo­sure of the exi­stence of such gene­ra­ted or mani­pu­la­ted con­tent in an appro­pria­te man­ner that does not ham­per the dis­play or enjoy­ment of the work, inclu­ding its nor­mal explo­ita­ti­on and use, while main­tai­ning the uti­li­ty and qua­li­ty of the work. In addi­ti­on, it is also appro­pria­te to envi­sa­ge a simi­lar dis­clo­sure obli­ga­ti­on in rela­ti­on to AI-gene­ra­ted or mani­pu­la­ted text to the ext­ent it is published with the pur­po­se of informing the public on mat­ters of public inte­rest unless the AI- gene­ra­ted con­tent has under­go­ne a pro­cess of human review or edi­to­ri­al con­trol and a natu­ral or legal per­son holds edi­to­ri­al respon­si­bi­li­ty for the publi­ca­ti­on of the content.

5. The infor­ma­ti­on refer­red to in para­graphs 1 to 4 shall be pro­vi­ded to the natu­ral per­sons con­cer­ned in a clear and distin­gu­is­ha­ble man­ner at the latest at the time of the first inter­ac­tion or expo­sure. The infor­ma­ti­on shall con­form to the appli­ca­ble acce­s­si­bi­li­ty requirements.

(136) The obli­ga­ti­ons pla­ced on pro­vi­ders and deployers of cer­tain AI systems in this Regu­la­ti­on to enable the detec­tion and dis­clo­sure that the out­puts of tho­se systems are arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted are par­ti­cu­lar­ly rele­vant to faci­li­ta­te the effec­ti­ve imple­men­ta­ti­on of Regu­la­ti­on (EU) 2022/2065. This applies in par­ti­cu­lar as regards the obli­ga­ti­ons of pro­vi­ders of very lar­ge online plat­forms or very lar­ge online search engi­nes to iden­ti­fy and miti­ga­te syste­mic risks that may ari­se from the dis­se­mi­na­ti­on of con­tent that has been arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted, in par­ti­cu­lar the risk of the actu­al or fore­seeable nega­ti­ve effects on demo­cra­tic pro­ce­s­ses, civic dis­cour­se and elec­to­ral pro­ce­s­ses, inclu­ding through dis­in­for­ma­ti­on. The requi­re­ment to label con­tent gene­ra­ted by AI systems under this Regu­la­ti­on is wit­hout pre­ju­di­ce to the obli­ga­ti­on in Artic­le 16(6) of Regu­la­ti­on (EU) 2022/2065 for pro­vi­ders of hosting ser­vices to pro­cess noti­ces on ille­gal con­tent recei­ved pur­su­ant to Artic­le 16(1) of that Regu­la­ti­on and should not influence the assess­ment and the decis­i­on on the ille­ga­li­ty of the spe­ci­fic con­tent. That assess­ment should be per­for­med sole­ly with refe­rence to the rules gover­ning the lega­li­ty of the content.

6. Para­graphs 1 to 4 shall not affect the requi­re­ments and obli­ga­ti­ons set out in Chap­ter III, and shall be wit­hout pre­ju­di­ce to other trans­pa­ren­cy obli­ga­ti­ons laid down in Uni­on or natio­nal law for deployers of AI systems.

(137) Com­pli­ance with the trans­pa­ren­cy obli­ga­ti­ons for the AI systems cover­ed by this Regu­la­ti­on should not be inter­pre­ted as indi­ca­ting that the use of the AI system or its out­put is lawful under this Regu­la­ti­on or other Uni­on and Mem­ber Sta­te law and should be wit­hout pre­ju­di­ce to other trans­pa­ren­cy obli­ga­ti­ons for deployers of AI systems laid down in Uni­on or natio­nal law.

7. The AI Office shall encou­ra­ge and faci­li­ta­te the dra­wing up of codes of prac­ti­ce at Uni­on level to faci­li­ta­te the effec­ti­ve imple­men­ta­ti­on of the obli­ga­ti­ons regar­ding the detec­tion and label­ling of arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted con­tent. The Com­mis­si­on may adopt imple­men­ting acts to appro­ve tho­se codes of prac­ti­ce in accordance with the pro­ce­du­re laid down in Artic­le 56(6). If it deems the code is not ade­qua­te, the Com­mis­si­on may adopt an imple­men­ting act spe­ci­fy­ing com­mon rules for the imple­men­ta­ti­on of tho­se obli­ga­ti­ons in accordance with the exami­na­ti­on pro­ce­du­re laid down in Artic­le 98(2).

(135) Wit­hout pre­ju­di­ce to the man­da­to­ry natu­re and full appli­ca­bi­li­ty of the trans­pa­ren­cy obli­ga­ti­ons, the Com­mis­si­on may also encou­ra­ge and faci­li­ta­te the dra­wing up of codes of prac­ti­ce at Uni­on level to faci­li­ta­te the effec­ti­ve imple­men­ta­ti­on of the obli­ga­ti­ons regar­ding the detec­tion and label­ling of arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted con­tent, inclu­ding to sup­port prac­ti­cal arran­ge­ments for making, as appro­pria­te, the detec­tion mecha­nisms acce­s­si­ble and faci­li­ta­ting coope­ra­ti­on with other actors along the value chain, dis­se­mi­na­ting con­tent or checking its authen­ti­ci­ty and pro­ven­an­ce to enable the public to effec­tively distin­gu­ish AI-gene­ra­ted content. 

Chap­ter V Gene­ral-pur­po­se AI models

Sec­tion 1 Clas­si­fi­ca­ti­on Rules

Artic­le 51 Clas­si­fi­ca­ti­on of gene­ral-pur­po­se AI models as gene­ral-pur­po­se AI models with syste­mic risk

1. A gene­ral-pur­po­se AI model shall be clas­si­fi­ed as a gene­ral-pur­po­se AI model with syste­mic risk if it meets any of the fol­lo­wing conditions:

(a) it has high impact capa­bi­li­ties eva­lua­ted on the basis of appro­pria­te tech­ni­cal tools and metho­do­lo­gies, inclu­ding indi­ca­tors and benchmarks;

(b) based on a decis­i­on of the Com­mis­si­on, ex offi­cio or fol­lo­wing a qua­li­fi­ed alert from the sci­en­ti­fic panel, it has capa­bi­li­ties or an impact equi­va­lent to tho­se set out in point (a) having regard to the cri­te­ria set out in Annex XIII.

2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in floating point operations is greater than 10²⁵.
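The 10²⁵ FLOP presumption in paragraph 2 is cumulative across all capability-enhancing training phases (see also Recital 111, which names pre-training, synthetic data generation and fine-tuning). Purely as an illustrative reading aid, and not part of the Regulation, the following sketch checks a hypothetical training plan against the threshold; the 6 × parameters × tokens estimate is a common engineering rule of thumb for dense transformer pre-training compute, not a legally defined methodology, and all model figures below are invented:

```python
# Illustration only: Art. 51(2) presumption threshold for systemic risk.
THRESHOLD_FLOPS = 1e25

def training_flops_estimate(n_params: float, n_tokens: float) -> float:
    """Rough compute estimate via the common 6*N*D heuristic
    (~6 FLOPs per parameter per training token); an assumption,
    not a methodology prescribed by the Regulation."""
    return 6.0 * n_params * n_tokens

def cumulative_flops(phases: list[float]) -> float:
    """Art. 51(2) counts cumulative compute across all phases intended
    to enhance capabilities prior to deployment (Recital 111)."""
    return sum(phases)

def presumed_systemic_risk(total_flops: float) -> bool:
    """True if the presumption of high impact capabilities applies."""
    return total_flops > THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model pre-trained on 15T tokens,
# plus a comparatively small fine-tuning run.
pretrain = training_flops_estimate(70e9, 15e12)   # ~6.3e24 FLOPs
finetune = training_flops_estimate(70e9, 0.1e12)  # ~4.2e22 FLOPs
total = cumulative_flops([pretrain, finetune])
print(presumed_systemic_risk(total))  # False: ~6.34e24, below 10**25
```

Because the threshold compares planned, cumulative compute, a provider can typically run this kind of estimate before training completes, which is what makes the ex-ante notification in Article 52(1) workable (Recital 112).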

3. The Com­mis­si­on shall adopt dele­ga­ted acts in accordance with Artic­le 97 to amend the thres­holds listed in para­graphs 1 and 2 of this Artic­le, as well as to sup­ple­ment bench­marks and indi­ca­tors in light of evol­ving tech­no­lo­gi­cal deve­lo­p­ments, such as algo­rith­mic impro­ve­ments or increa­sed hard­ware effi­ci­en­cy, when neces­sa­ry, for the­se thres­holds to reflect the sta­te of the art.

(110) General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content. Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors. In particular, international approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks from models of making copies of themselves or ‘self-replicating’ or training other models; the ways in which models can give rise to harmful bias and discrimination with risks to individuals, communities or societies; the facilitation of disinformation or harming privacy with threats to democratic values and human rights; risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain activity or an entire community.

(111) It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI models with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its placing on the market or when deployers interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of computation used for the training of the general-purpose AI model measured in floating point operations is one of the relevant approximations for model capabilities. The cumulative amount of computation used for training includes the computation used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of floating point operations should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability.

To inform this, the AI Office should enga­ge with the sci­en­ti­fic com­mu­ni­ty, indu­stry, civil socie­ty and other experts. Thres­holds, as well as tools and bench­marks for the assess­ment of high-impact capa­bi­li­ties, should be strong pre­dic­tors of gene­ra­li­ty, its capa­bi­li­ties and asso­cia­ted syste­mic risk of gene­ral-pur­po­se AI models, and could take into account the way the model will be pla­ced on the mar­ket or the num­ber of users it may affect. To com­ple­ment this system, the­re should be a pos­si­bi­li­ty for the Com­mis­si­on to take indi­vi­du­al decis­i­ons desi­gna­ting a gene­ral-pur­po­se AI model as a gene­ral-pur­po­se AI model with syste­mic risk if it is found that such model has capa­bi­li­ties or an impact equi­va­lent to tho­se cap­tu­red by the set thres­hold. That decis­i­on should be taken on the basis of an over­all assess­ment of the cri­te­ria for the desi­gna­ti­on of a gene­ral-pur­po­se AI model with syste­mic risk set out in an annex to this Regu­la­ti­on, such as qua­li­ty or size of the trai­ning data set, num­ber of busi­ness and end users, its input and out­put moda­li­ties, its level of auto­no­my and sca­la­bi­li­ty, or the tools it has access to.

Upon a rea­so­ned request of a pro­vi­der who­se model has been desi­gna­ted as a gene­ral-pur­po­se AI model with syste­mic risk, the Com­mis­si­on should take the request into account and may deci­de to reas­sess whe­ther the gene­ral-pur­po­se AI model can still be con­side­red to pre­sent syste­mic risks.

Artic­le 52 Procedure

1. Whe­re a gene­ral-pur­po­se AI model meets the con­di­ti­on refer­red to in Artic­le 51(1), point (a), the rele­vant pro­vi­der shall noti­fy the Com­mis­si­on wit­hout delay and in any event within two weeks after that requi­re­ment is met or it beco­mes known that it will be met. That noti­fi­ca­ti­on shall include the infor­ma­ti­on neces­sa­ry to demon­stra­te that the rele­vant requi­re­ment has been met. If the Com­mis­si­on beco­mes awa­re of a gene­ral-pur­po­se AI model pre­sen­ting syste­mic risks of which it has not been noti­fi­ed, it may deci­de to desi­gna­te it as a model with syste­mic risk.

2. The pro­vi­der of a gene­ral-pur­po­se AI model that meets the con­di­ti­on refer­red to in Artic­le 51(1), point (a), may pre­sent, with its noti­fi­ca­ti­on, suf­fi­ci­ent­ly sub­stan­tia­ted argu­ments to demon­stra­te that, excep­tio­nal­ly, alt­hough it meets that requi­re­ment, the gene­ral-pur­po­se AI model does not pre­sent, due to its spe­ci­fic cha­rac­te­ri­stics, syste­mic risks and the­r­e­fo­re should not be clas­si­fi­ed as a gene­ral-pur­po­se AI model with syste­mic risk.

3. Whe­re the Com­mis­si­on con­clu­des that the argu­ments sub­mit­ted pur­su­ant to para­graph 2 are not suf­fi­ci­ent­ly sub­stan­tia­ted and the rele­vant pro­vi­der was not able to demon­stra­te that the gene­ral-pur­po­se AI model does not pre­sent, due to its spe­ci­fic cha­rac­te­ri­stics, syste­mic risks, it shall reject tho­se argu­ments, and the gene­ral-pur­po­se AI model shall be con­side­red to be a gene­ral-pur­po­se AI model with syste­mic risk.

4. The Com­mis­si­on may desi­gna­te a gene­ral-pur­po­se AI model as pre­sen­ting syste­mic risks, ex offi­cio or fol­lo­wing a qua­li­fi­ed alert from the sci­en­ti­fic panel pur­su­ant to Artic­le 90(1), point (a), on the basis of cri­te­ria set out in Annex XIII.

The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 97 in order to amend Annex XIII by spe­ci­fy­ing and updating the cri­te­ria set out in that Annex.

5. Upon a rea­so­ned request of a pro­vi­der who­se model has been desi­gna­ted as a gene­ral-pur­po­se AI model with syste­mic risk pur­su­ant to para­graph 4, the Com­mis­si­on shall take the request into account and may deci­de to reas­sess whe­ther the gene­ral-pur­po­se AI model can still be con­side­red to pre­sent syste­mic risks on the basis of the cri­te­ria set out in Annex XIII. Such a request shall con­tain objec­ti­ve, detail­ed and new rea­sons that have ari­sen sin­ce the desi­gna­ti­on decis­i­on. Pro­vi­ders may request reas­sess­ment at the ear­liest six months after the desi­gna­ti­on decis­i­on. Whe­re the Com­mis­si­on, fol­lo­wing its reas­sess­ment, deci­des to main­tain the desi­gna­ti­on as a gene­ral-pur­po­se AI model with syste­mic risk, pro­vi­ders may request reas­sess­ment at the ear­liest six months after that decision.
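The time limits in this Article lend themselves to simple date arithmetic. The following is a minimal sketch offered purely as a reading aid: the function names are invented, the example dates are arbitrary, and the Regulation does not specify how the six-month period is to be counted, so the 6 × 30 days used here is an assumption:

```python
from datetime import date, timedelta

def notification_deadline(threshold_met: date) -> date:
    """Art. 52(1): notify the Commission without delay, and in any event
    within two weeks after the requirement is met (or it becomes known
    that it will be met)."""
    return threshold_met + timedelta(weeks=2)

def earliest_reassessment_request(designation: date) -> date:
    """Art. 52(5): a provider may request reassessment at the earliest
    six months after the designation decision. Approximated here as
    6 x 30 days; the Regulation does not define how months are counted."""
    return designation + timedelta(days=6 * 30)

print(notification_deadline(date(2025, 3, 1)))          # 2025-03-15
print(earliest_reassessment_request(date(2025, 3, 1)))  # 2025-08-28
```

Note that under paragraph 5 the same six-month clock restarts if the Commission maintains the designation after a reassessment.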

6. The Com­mis­si­on shall ensu­re that a list of gene­ral-pur­po­se AI models with syste­mic risk is published and shall keep that list up to date, wit­hout pre­ju­di­ce to the need to obser­ve and pro­tect intellec­tu­al pro­per­ty rights and con­fi­den­ti­al busi­ness infor­ma­ti­on or trade secrets in accordance with Uni­on and natio­nal law. 

(112) It is also necessary to clarify a procedure for the classification of a general-purpose AI model with systemic risks. A general-purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to be a general-purpose AI model with systemic risk. The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the threshold of floating point operations because training of general-purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general-purpose AI models are able to know if their model would meet the threshold before the training is completed. In the context of that notification, the provider should be able to demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general-purpose AI model with systemic risks. That information is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks and the providers can start to engage with the AI Office early on. That information is especially important with regard to general-purpose AI models that are planned to be released as open-source, given that, after the open-source model release, necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement.

(113) If the Com­mis­si­on beco­mes awa­re of the fact that a gene­ral-pur­po­se AI model meets the requi­re­ments to clas­si­fy as a gene­ral-pur­po­se AI model with syste­mic risk, which pre­vious­ly had eit­her not been known or of which the rele­vant pro­vi­der has fai­led to noti­fy the Com­mis­si­on, the Com­mis­si­on should be empowered to desi­gna­te it so. A system of qua­li­fi­ed alerts should ensu­re that the AI Office is made awa­re by the sci­en­ti­fic panel of gene­ral-pur­po­se AI models that should pos­si­bly be clas­si­fi­ed as gene­ral-pur­po­se AI models with syste­mic risk, in addi­ti­on to the moni­to­ring acti­vi­ties of the AI Office.

Sec­tion 2 Obli­ga­ti­ons For Pro­vi­ders Of Gene­ral Pur­po­se AI Models

Artic­le 53 Obli­ga­ti­ons for pro­vi­ders of gene­ral-pur­po­se AI models

(101) Pro­vi­ders of gene­ral-pur­po­se AI models have a par­ti­cu­lar role and respon­si­bi­li­ty along the AI value chain, as the models they pro­vi­de may form the basis for a ran­ge of down­stream systems, often pro­vi­ded by down­stream pro­vi­ders that neces­si­ta­te a good under­stan­ding of the models and their capa­bi­li­ties, both to enable the inte­gra­ti­on of such models into their pro­ducts, and to ful­fil their obli­ga­ti­ons under this or other regu­la­ti­ons. The­r­e­fo­re, pro­por­tio­na­te trans­pa­ren­cy mea­su­res should be laid down, inclu­ding the dra­wing up and kee­ping up to date of docu­men­ta­ti­on, and the pro­vi­si­on of infor­ma­ti­on on the gene­ral-pur­po­se AI model for its usa­ge by the down­stream pro­vi­ders. Tech­ni­cal docu­men­ta­ti­on should be pre­pared and kept up to date by the gene­ral-pur­po­se AI model pro­vi­der for the pur­po­se of making it available, upon request, to the AI Office and the natio­nal com­pe­tent aut­ho­ri­ties. The mini­mal set of ele­ments to be inclu­ded in such docu­men­ta­ti­on should be set out in spe­ci­fic anne­xes to this Regu­la­ti­on. The Com­mis­si­on should be empowered to amend tho­se anne­xes by means of dele­ga­ted acts in light of evol­ving tech­no­lo­gi­cal developments.

(109) Com­pli­ance with the obli­ga­ti­ons appli­ca­ble to the pro­vi­ders of gene­ral-pur­po­se AI models should be com­men­su­ra­te and pro­por­tio­na­te to the type of model pro­vi­der, exclu­ding the need for com­pli­ance for per­sons who deve­lop or use models for non-pro­fes­sio­nal or sci­en­ti­fic rese­arch pur­po­ses, who should nevert­hel­ess be encou­ra­ged to vol­un­t­a­ri­ly com­ply with the­se requi­re­ments. Wit­hout pre­ju­di­ce to Uni­on copy­right law, com­pli­ance with tho­se obli­ga­ti­ons should take due account of the size of the pro­vi­der and allow sim­pli­fi­ed ways of com­pli­ance for SMEs, inclu­ding start-ups, that should not repre­sent an exce­s­si­ve cost and not dis­cou­ra­ge the use of such models. In the case of a modi­fi­ca­ti­on or fine-tuning of a model, the obli­ga­ti­ons for pro­vi­ders of gene­ral-pur­po­se AI models should be limi­t­ed to that modi­fi­ca­ti­on or fine-tuning, for exam­p­le by com­ple­men­ting the alre­a­dy exi­sting tech­ni­cal docu­men­ta­ti­on with infor­ma­ti­on on the modi­fi­ca­ti­ons, inclu­ding new trai­ning data sources, as a means to com­ply with the value chain obli­ga­ti­ons pro­vi­ded in this Regulation. 

1. Pro­vi­ders of gene­ral-pur­po­se AI models shall:

(a) draw up and keep up-to-date the tech­ni­cal docu­men­ta­ti­on of the model, inclu­ding its trai­ning and test­ing pro­cess and the results of its eva­lua­ti­on, which shall con­tain, at a mini­mum, the infor­ma­ti­on set out in Annex XI for the pur­po­se of pro­vi­ding it, upon request, to the AI Office and the natio­nal com­pe­tent authorities;

(b) draw up, keep up-to-date and make available infor­ma­ti­on and docu­men­ta­ti­on to pro­vi­ders of AI systems who intend to inte­gra­te the gene­ral-pur­po­se AI model into their AI systems. Wit­hout pre­ju­di­ce to the need to obser­ve and pro­tect intellec­tu­al pro­per­ty rights and con­fi­den­ti­al busi­ness infor­ma­ti­on or trade secrets in accordance with Uni­on and natio­nal law, the infor­ma­ti­on and docu­men­ta­ti­on shall:

(i) enable pro­vi­ders of AI systems to have a good under­stan­ding of the capa­bi­li­ties and limi­ta­ti­ons of the gene­ral-pur­po­se AI model and to com­ply with their obli­ga­ti­ons pur­su­ant to this Regu­la­ti­on; and

(ii) con­tain, at a mini­mum, the ele­ments set out in Annex XII;

(c) put in place a poli­cy to com­ply with Uni­on law on copy­right and rela­ted rights, and in par­ti­cu­lar to iden­ti­fy and com­ply with, inclu­ding through sta­te-of-the-art tech­no­lo­gies, a reser­va­ti­on of rights expres­sed pur­su­ant to Artic­le 4(3) of Direc­ti­ve (EU) 2019/790;

(105) General-purpose AI models, in particular large generative AI models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos, and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorisation of the rightsholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purpose of text and data mining, under certain conditions. Under these rules, rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightsholders if they want to carry out text and data mining over such works.

(106) Pro­vi­ders that place gene­ral-pur­po­se AI models on the Uni­on mar­ket should ensu­re com­pli­ance with the rele­vant obli­ga­ti­ons in this Regu­la­ti­on. To that end, pro­vi­ders of gene­ral-pur­po­se AI models should put in place a poli­cy to com­ply with Uni­on law on copy­right and rela­ted rights, in par­ti­cu­lar to iden­ti­fy and com­ply with the reser­va­ti­on of rights expres­sed by rights­hol­ders pur­su­ant to Artic­le 4(3) of Direc­ti­ve (EU) 2019/790. Any pro­vi­der pla­cing a gene­ral-pur­po­se AI model on the Uni­on mar­ket should com­ply with this obli­ga­ti­on, regard­less of the juris­dic­tion in which the copy­right-rele­vant acts under­pin­ning the trai­ning of tho­se gene­ral-pur­po­se AI models take place. This is neces­sa­ry to ensu­re a level play­ing field among pro­vi­ders of gene­ral-pur­po­se AI models whe­re no pro­vi­der should be able to gain a com­pe­ti­ti­ve advan­ta­ge in the Uni­on mar­ket by app­ly­ing lower copy­right stan­dards than tho­se pro­vi­ded in the Union. 

(108) With regard to the obli­ga­ti­ons impo­sed on pro­vi­ders of gene­ral-pur­po­se AI models to put in place a poli­cy to com­ply with Uni­on copy­right law and make publicly available a sum­ma­ry of the con­tent used for the trai­ning, the AI Office should moni­tor whe­ther the pro­vi­der has ful­fil­led tho­se obli­ga­ti­ons wit­hout veri­fy­ing or pro­ce­e­ding to a work-by-work assess­ment of the trai­ning data in terms of copy­right com­pli­ance. This Regu­la­ti­on does not affect the enforce­ment of copy­right rules as pro­vi­ded for under Uni­on law.

(d) draw up and make publicly available a suf­fi­ci­ent­ly detail­ed sum­ma­ry about the con­tent used for trai­ning of the gene­ral-pur­po­se AI model, accor­ding to a tem­p­la­te pro­vi­ded by the AI Office.

(107) In order to increa­se trans­pa­ren­cy on the data that is used in the pre-trai­ning and trai­ning of gene­ral-pur­po­se AI models, inclu­ding text and data pro­tec­ted by copy­right law, it is ade­qua­te that pro­vi­ders of such models draw up and make publicly available a suf­fi­ci­ent­ly detail­ed sum­ma­ry of the con­tent used for trai­ning the gene­ral-pur­po­se AI model. While taking into due account the need to pro­tect trade secrets and con­fi­den­ti­al busi­ness infor­ma­ti­on, this sum­ma­ry should be gene­ral­ly com­pre­hen­si­ve in its scope instead of tech­ni­cal­ly detail­ed to faci­li­ta­te par­ties with legi­ti­ma­te inte­rests, inclu­ding copy­right hol­ders, to exer­cise and enforce their rights under Uni­on law, for exam­p­le by listing the main data coll­ec­tions or sets that went into trai­ning the model, such as lar­ge pri­va­te or public data­ba­ses or data archi­ves, and by pro­vi­ding a nar­ra­ti­ve expl­ana­ti­on about other data sources used. It is appro­pria­te for the AI Office to pro­vi­de a tem­p­la­te for the sum­ma­ry, which should be simp­le, effec­ti­ve, and allow the pro­vi­der to pro­vi­de the requi­red sum­ma­ry in nar­ra­ti­ve form.

2. The obli­ga­ti­ons set out in para­graph 1, points (a) and (b), shall not app­ly to pro­vi­ders of AI models that are released under a free and open-source licence that allo­ws for the access, usa­ge, modi­fi­ca­ti­on, and dis­tri­bu­ti­on of the model, and who­se para­me­ters, inclu­ding the weights, the infor­ma­ti­on on the model archi­tec­tu­re, and the infor­ma­ti­on on model usa­ge, are made publicly available. This excep­ti­on shall not app­ly to gene­ral-pur­po­se AI models with syste­mic risks.

(102) Software and data, including models, released under a free and open-source licence that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy. General-purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should be considered to be free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models, under the condition that the original provider of the model is credited and the identical or comparable terms of distribution are respected.

3. Pro­vi­ders of gene­ral-pur­po­se AI models shall coope­ra­te as neces­sa­ry with the Com­mis­si­on and the natio­nal com­pe­tent aut­ho­ri­ties in the exer­cise of their com­pe­ten­ces and powers pur­su­ant to this Regulation.

4. Pro­vi­ders of gene­ral-pur­po­se AI models may rely on codes of prac­ti­ce within the mea­ning of Artic­le 56 to demon­stra­te com­pli­ance with the obli­ga­ti­ons set out in para­graph 1 of this Artic­le, until a har­mo­ni­s­ed stan­dard is published. Com­pli­ance with Euro­pean har­mo­ni­s­ed stan­dards grants pro­vi­ders the pre­sump­ti­on of con­for­mi­ty to the ext­ent that tho­se stan­dards cover tho­se obli­ga­ti­ons. Pro­vi­ders of gene­ral-pur­po­se AI models who do not adhe­re to an appro­ved code of prac­ti­ce or do not com­ply with a Euro­pean har­mo­ni­s­ed stan­dard shall demon­stra­te alter­na­ti­ve ade­qua­te means of com­pli­ance for assess­ment by the Commission.

5. For the purpose of facilitating compliance with Annex XI, in particular points 2(d) and (e) thereof, the Commission is empowered to adopt delegated acts in accordance with Article 97 to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation.

6. The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 97(2) to amend Anne­xes XI and XII in light of evol­ving tech­no­lo­gi­cal developments.

7. Any infor­ma­ti­on or docu­men­ta­ti­on obtai­ned pur­su­ant to this Artic­le, inclu­ding trade secrets, shall be trea­ted in accordance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 78.

(85) Gene­ral-pur­po­se AI systems may be used as high-risk AI systems by them­sel­ves or be com­pon­ents of other high-risk AI systems. The­r­e­fo­re, due to their par­ti­cu­lar natu­re and in order to ensu­re a fair sha­ring of respon­si­bi­li­ties along the AI value chain, the pro­vi­ders of such systems should, irre­spec­ti­ve of whe­ther they may be used as high-risk AI systems as such by other pro­vi­ders or as com­pon­ents of high-risk AI systems and unless pro­vi­ded other­wi­se under this Regu­la­ti­on, clo­se­ly coope­ra­te with the pro­vi­ders of the rele­vant high-risk AI systems to enable their com­pli­ance with the rele­vant obli­ga­ti­ons under this Regu­la­ti­on and with the com­pe­tent aut­ho­ri­ties estab­lished under this Regulation.

Artic­le 54 Aut­ho­ri­sed repre­sen­ta­ti­ves of pro­vi­ders of gene­ral-pur­po­se AI models

1. Pri­or to pla­cing a gene­ral-pur­po­se AI model on the Uni­on mar­ket, pro­vi­ders estab­lished in third count­ries shall, by writ­ten man­da­te, appoint an aut­ho­ri­sed repre­sen­ta­ti­ve which is estab­lished in the Union.

2. The pro­vi­der shall enable its aut­ho­ri­sed repre­sen­ta­ti­ve to per­form the tasks spe­ci­fi­ed in the man­da­te recei­ved from the provider.

3. The aut­ho­ri­sed repre­sen­ta­ti­ve shall per­form the tasks spe­ci­fi­ed in the man­da­te recei­ved from the pro­vi­der. It shall pro­vi­de a copy of the man­da­te to the AI Office upon request, in one of the offi­ci­al lan­guages of the insti­tu­ti­ons of the Uni­on. For the pur­po­ses of this Regu­la­ti­on, the man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to car­ry out the fol­lo­wing tasks:

(a) veri­fy that the tech­ni­cal docu­men­ta­ti­on spe­ci­fi­ed in Annex XI has been drawn up and all obli­ga­ti­ons refer­red to in Artic­le 53 and, whe­re appli­ca­ble, Artic­le 55 have been ful­fil­led by the provider;

(b) keep a copy of the tech­ni­cal docu­men­ta­ti­on spe­ci­fi­ed in Annex XI at the dis­po­sal of the AI Office and natio­nal com­pe­tent aut­ho­ri­ties, for a peri­od of 10 years after the gene­ral-pur­po­se AI model has been pla­ced on the mar­ket, and the cont­act details of the pro­vi­der that appoin­ted the aut­ho­ri­sed representative;

(c) pro­vi­de the AI Office, upon a rea­so­ned request, with all the infor­ma­ti­on and docu­men­ta­ti­on, inclu­ding that refer­red to in point (b), neces­sa­ry to demon­stra­te com­pli­ance with the obli­ga­ti­ons in this Chapter;

(d) coope­ra­te with the AI Office and com­pe­tent aut­ho­ri­ties, upon a rea­so­ned request, in any action they take in rela­ti­on to the gene­ral-pur­po­se AI model, inclu­ding when the model is inte­gra­ted into AI systems pla­ced on the mar­ket or put into ser­vice in the Union.

4. The man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to be addres­sed, in addi­ti­on to or instead of the pro­vi­der, by the AI Office or the com­pe­tent aut­ho­ri­ties, on all issues rela­ted to ensu­ring com­pli­ance with this Regulation.

5. The aut­ho­ri­sed repre­sen­ta­ti­ve shall ter­mi­na­te the man­da­te if it con­siders or has rea­son to con­sider the pro­vi­der to be acting con­tra­ry to its obli­ga­ti­ons pur­su­ant to this Regu­la­ti­on. In such a case, it shall also imme­dia­te­ly inform the AI Office about the ter­mi­na­ti­on of the man­da­te and the rea­sons therefor.

6. The obli­ga­ti­on set out in this Artic­le shall not app­ly to pro­vi­ders of gene­ral-pur­po­se AI models that are released under a free and open-source licence that allo­ws for the access, usa­ge, modi­fi­ca­ti­on, and dis­tri­bu­ti­on of the model, and who­se para­me­ters, inclu­ding the weights, the infor­ma­ti­on on the model archi­tec­tu­re, and the infor­ma­ti­on on model usa­ge, are made publicly available, unless the gene­ral-pur­po­se AI models pre­sent syste­mic risks.

Sec­tion 3 Obli­ga­ti­ons Of Pro­vi­ders Of Gene­ral Pur­po­se AI Models With Syste­mic Risk

Artic­le 55 Obli­ga­ti­ons of pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risk

(114) The pro­vi­ders of gene­ral-pur­po­se AI models pre­sen­ting syste­mic risks should be sub­ject, in addi­ti­on to the obli­ga­ti­ons pro­vi­ded for pro­vi­ders of gene­ral-pur­po­se AI models, to obli­ga­ti­ons aimed at iden­ti­fy­ing and miti­ga­ting tho­se risks and ensu­ring an ade­qua­te level of cyber­se­cu­ri­ty pro­tec­tion, regard­less of whe­ther it is pro­vi­ded as a stan­da­lo­ne model or embedded in an AI system or a pro­duct. To achie­ve tho­se objec­ti­ves, this Regu­la­ti­on should requi­re pro­vi­ders to per­form the neces­sa­ry model eva­lua­tions, in par­ti­cu­lar pri­or to its first pla­cing on the mar­ket, inclu­ding con­duc­ting and docu­men­ting adver­sa­ri­al test­ing of models, also, as appro­pria­te, through inter­nal or inde­pen­dent exter­nal test­ing. In addi­ti­on, pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risks should con­ti­nuous­ly assess and miti­ga­te syste­mic risks, inclu­ding for exam­p­le by put­ting in place risk-manage­ment poli­ci­es, such as accoun­ta­bi­li­ty and gover­nan­ce pro­ce­s­ses, imple­men­ting post-mar­ket moni­to­ring, taking appro­pria­te mea­su­res along the enti­re model’s life­cy­cle and coope­ra­ting with rele­vant actors along the AI value chain.

1. In addi­ti­on to the obli­ga­ti­ons listed in Artic­les 53 and 54, pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risk shall:

(a) per­form model eva­lua­ti­on in accordance with stan­dar­di­sed pro­to­cols and tools reflec­ting the sta­te of the art, inclu­ding con­duc­ting and docu­men­ting adver­sa­ri­al test­ing of the model with a view to iden­ti­fy­ing and miti­ga­ting syste­mic risks;

(b) assess and miti­ga­te pos­si­ble syste­mic risks at Uni­on level, inclu­ding their sources, that may stem from the deve­lo­p­ment, the pla­cing on the mar­ket, or the use of gene­ral-pur­po­se AI models with syste­mic risk; 

(c) keep track of, docu­ment, and report, wit­hout undue delay, to the AI Office and, as appro­pria­te, to natio­nal com­pe­tent aut­ho­ri­ties, rele­vant infor­ma­ti­on about serious inci­dents and pos­si­ble cor­rec­ti­ve mea­su­res to address them;

(d) ensu­re an ade­qua­te level of cyber­se­cu­ri­ty pro­tec­tion for the gene­ral-pur­po­se AI model with syste­mic risk and the phy­si­cal infras­truc­tu­re of the model.

2. Pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risk may rely on codes of prac­ti­ce within the mea­ning of Artic­le 56 to demon­stra­te com­pli­ance with the obli­ga­ti­ons set out in para­graph 1 of this Artic­le, until a har­mo­ni­s­ed stan­dard is published. Com­pli­ance with Euro­pean har­mo­ni­s­ed stan­dards grants pro­vi­ders the pre­sump­ti­on of con­for­mi­ty to the ext­ent that tho­se stan­dards cover tho­se obli­ga­ti­ons. Pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risks who do not adhe­re to an appro­ved code of prac­ti­ce or do not com­ply with a Euro­pean har­mo­ni­s­ed stan­dard shall demon­stra­te alter­na­ti­ve ade­qua­te means of com­pli­ance for assess­ment by the Commission.

3. Any infor­ma­ti­on or docu­men­ta­ti­on obtai­ned pur­su­ant to this Artic­le, inclu­ding trade secrets, shall be trea­ted in accordance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 78.

(115) Pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risks should assess and miti­ga­te pos­si­ble syste­mic risks. If, despi­te efforts to iden­ti­fy and pre­vent risks rela­ted to a gene­ral-pur­po­se AI model that may pre­sent syste­mic risks, the deve­lo­p­ment or use of the model cau­ses a serious inci­dent, the gene­ral-pur­po­se AI model pro­vi­der should wit­hout undue delay keep track of the inci­dent and report any rele­vant infor­ma­ti­on and pos­si­ble cor­rec­ti­ve mea­su­res to the Com­mis­si­on and natio­nal com­pe­tent authorities.

Fur­ther­mo­re, pro­vi­ders should ensu­re an ade­qua­te level of cyber­se­cu­ri­ty pro­tec­tion for the model and its phy­si­cal infras­truc­tu­re, if appro­pria­te, along the enti­re model life­cy­cle. Cyber­se­cu­ri­ty pro­tec­tion rela­ted to syste­mic risks asso­cia­ted with mali­cious use or attacks should duly con­sider acci­den­tal model leaka­ge, unaut­ho­ri­sed releases, cir­cum­ven­ti­on of safe­ty mea­su­res, and defence against cyber­at­tacks, unaut­ho­ri­sed access or model theft. That pro­tec­tion could be faci­li­ta­ted by secu­ring model weights, algo­rith­ms, ser­vers, and data sets, such as through ope­ra­tio­nal secu­ri­ty mea­su­res for infor­ma­ti­on secu­ri­ty, spe­ci­fic cyber­se­cu­ri­ty poli­ci­es, ade­qua­te tech­ni­cal and estab­lished solu­ti­ons, and cyber and phy­si­cal access con­trols, appro­pria­te to the rele­vant cir­cum­stances and the risks involved.

Sec­tion 4 Codes of practice

Artic­le 56 Codes of practice

1. The AI Office shall encou­ra­ge and faci­li­ta­te the dra­wing up of codes of prac­ti­ce at Uni­on level in order to con­tri­bu­te to the pro­per appli­ca­ti­on of this Regu­la­ti­on, taking into account inter­na­tio­nal approaches.

2. The AI Office and the Board shall aim to ensu­re that the codes of prac­ti­ce cover at least the obli­ga­ti­ons pro­vi­ded for in Artic­les 53 and 55, inclu­ding the fol­lo­wing issues:

(a) the means to ensu­re that the infor­ma­ti­on refer­red to in Artic­le 53(1), points (a) and (b), is kept up to date in light of mar­ket and tech­no­lo­gi­cal developments;

(b) the ade­qua­te level of detail for the sum­ma­ry about the con­tent used for training;

(c) the iden­ti­fi­ca­ti­on of the type and natu­re of the syste­mic risks at Uni­on level, inclu­ding their sources, whe­re appropriate;

(d) the mea­su­res, pro­ce­du­res and moda­li­ties for the assess­ment and manage­ment of the syste­mic risks at Uni­on level, inclu­ding the docu­men­ta­ti­on the­reof, which shall be pro­por­tio­na­te to the risks, take into con­side­ra­ti­on their seve­ri­ty and pro­ba­bi­li­ty and take into account the spe­ci­fic chal­lenges of tack­ling tho­se risks in light of the pos­si­ble ways in which such risks may emer­ge and mate­ria­li­se along the AI value chain.

3. The AI Office may invi­te all pro­vi­ders of gene­ral-pur­po­se AI models, as well as rele­vant natio­nal com­pe­tent aut­ho­ri­ties, to par­ti­ci­pa­te in the dra­wing-up of codes of prac­ti­ce. Civil socie­ty orga­ni­sa­ti­ons, indu­stry, aca­de­mia and other rele­vant stake­hol­ders, such as down­stream pro­vi­ders and inde­pen­dent experts, may sup­port the process.

4. The AI Office and the Board shall aim to ensu­re that the codes of prac­ti­ce cle­ar­ly set out their spe­ci­fic objec­ti­ves and con­tain com­mit­ments or mea­su­res, inclu­ding key per­for­mance indi­ca­tors as appro­pria­te, to ensu­re the achie­ve­ment of tho­se objec­ti­ves, and that they take due account of the needs and inte­rests of all inte­re­sted par­ties, inclu­ding affec­ted per­sons, at Uni­on level. 

5. The AI Office shall aim to ensu­re that par­ti­ci­pan­ts to the codes of prac­ti­ce report regu­lar­ly to the AI Office on the imple­men­ta­ti­on of the com­mit­ments and the mea­su­res taken and their out­co­mes, inclu­ding as mea­su­red against the key per­for­mance indi­ca­tors as appro­pria­te. Key per­for­mance indi­ca­tors and report­ing com­mit­ments shall reflect dif­fe­ren­ces in size and capa­ci­ty bet­ween various participants.

6. The AI Office and the Board shall regu­lar­ly moni­tor and eva­lua­te the achie­ve­ment of the objec­ti­ves of the codes of prac­ti­ce by the par­ti­ci­pan­ts and their con­tri­bu­ti­on to the pro­per appli­ca­ti­on of this Regu­la­ti­on. The AI Office and the Board shall assess whe­ther the codes of prac­ti­ce cover the obli­ga­ti­ons pro­vi­ded for in Artic­les 53 and 55, and shall regu­lar­ly moni­tor and eva­lua­te the achie­ve­ment of their objec­ti­ves. They shall publish their assess­ment of the ade­qua­cy of the codes of practice.

The Com­mis­si­on may, by way of an imple­men­ting act, appro­ve a code of prac­ti­ce and give it a gene­ral vali­di­ty within the Uni­on. That imple­men­ting act shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

7. The AI Office may invi­te all pro­vi­ders of gene­ral-pur­po­se AI models to adhe­re to the codes of prac­ti­ce. For pro­vi­ders of gene­ral-pur­po­se AI models not pre­sen­ting syste­mic risks this adherence may be limi­t­ed to the obli­ga­ti­ons pro­vi­ded for in Artic­le 53, unless they decla­re expli­ci­t­ly their inte­rest to join the full code.

8. The AI Office shall, as appro­pria­te, also encou­ra­ge and faci­li­ta­te the review and adap­t­ati­on of the codes of prac­ti­ce, in par­ti­cu­lar in light of emer­ging stan­dards. The AI Office shall assist in the assess­ment of available standards.

9. Codes of prac­ti­ce shall be rea­dy at the latest by … [nine months from the date of ent­ry into force of this Regu­la­ti­on]. The AI Office shall take the neces­sa­ry steps, inclu­ding invi­ting pro­vi­ders pur­su­ant to para­graph 7.

If, by … [12 months from the date of ent­ry into force], a code of prac­ti­ce can­not be fina­li­sed, or if the AI Office deems it is not ade­qua­te fol­lo­wing its assess­ment under para­graph 6 of this Artic­le, the Com­mis­si­on may pro­vi­de, by means of imple­men­ting acts, com­mon rules for the imple­men­ta­ti­on of the obli­ga­ti­ons pro­vi­ded for in Artic­les 53 and 55, inclu­ding the issues set out in para­graph 2 of this Artic­le. Tho­se imple­men­ting acts shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

(116) The AI Office should encou­ra­ge and faci­li­ta­te the dra­wing up, review and adap­t­ati­on of codes of prac­ti­ce, taking into account inter­na­tio­nal approa­ches. All pro­vi­ders of gene­ral-pur­po­se AI models could be invi­ted to par­ti­ci­pa­te. To ensu­re that the codes of prac­ti­ce reflect the sta­te of the art and duly take into account a diver­se set of per­spec­ti­ves, the AI Office should col­la­bo­ra­te with rele­vant natio­nal com­pe­tent aut­ho­ri­ties, and could, whe­re appro­pria­te, con­sult with civil socie­ty orga­ni­sa­ti­ons and other rele­vant stake­hol­ders and experts, inclu­ding the Sci­en­ti­fic Panel, for the dra­wing up of such codes. Codes of prac­ti­ce should cover obli­ga­ti­ons for pro­vi­ders of gene­ral-pur­po­se AI models and of gene­ral-pur­po­se AI models pre­sen­ting syste­mic risks. In addi­ti­on, as regards syste­mic risks, codes of prac­ti­ce should help to estab­lish a risk taxo­no­my of the type and natu­re of the syste­mic risks at Uni­on level, inclu­ding their sources. Codes of prac­ti­ce should also be focu­sed on spe­ci­fic risk assess­ment and miti­ga­ti­on measures.

(117) The codes of prac­ti­ce should repre­sent a cen­tral tool for the pro­per com­pli­ance with the obli­ga­ti­ons pro­vi­ded for under this Regu­la­ti­on for pro­vi­ders of gene­ral-pur­po­se AI models. Pro­vi­ders should be able to rely on codes of prac­ti­ce to demon­stra­te com­pli­ance with the obli­ga­ti­ons. By means of imple­men­ting acts, the Com­mis­si­on may deci­de to appro­ve a code of prac­ti­ce and give it a gene­ral vali­di­ty within the Uni­on, or, alter­na­tively, to pro­vi­de com­mon rules for the imple­men­ta­ti­on of the rele­vant obli­ga­ti­ons, if, by the time this Regu­la­ti­on beco­mes appli­ca­ble, a code of prac­ti­ce can­not be fina­li­sed or is not dee­med ade­qua­te by the AI Office. Once a har­mo­ni­s­ed stan­dard is published and asses­sed as sui­ta­ble to cover the rele­vant obli­ga­ti­ons by the AI Office, com­pli­ance with a Euro­pean har­mo­ni­s­ed stan­dard should grant pro­vi­ders the pre­sump­ti­on of con­for­mi­ty. Pro­vi­ders of gene­ral-pur­po­se AI models should fur­ther­mo­re be able to demon­stra­te com­pli­ance using alter­na­ti­ve ade­qua­te means, if codes of prac­ti­ce or har­mo­ni­s­ed stan­dards are not available, or they choo­se not to rely on those. 

Chap­ter VI Mea­su­res in sup­port of innovation

Artic­le 57 AI regu­la­to­ry sandboxes

1. Mem­ber Sta­tes shall ensu­re that their com­pe­tent aut­ho­ri­ties estab­lish at least one AI regu­la­to­ry sand­box at natio­nal level, which shall be ope­ra­tio­nal by … [24 months from the date of ent­ry into force of this Regu­la­ti­on]. That sand­box may also be estab­lished joint­ly with the com­pe­tent aut­ho­ri­ties of other Mem­ber Sta­tes. The Com­mis­si­on may pro­vi­de tech­ni­cal sup­port, advice and tools for the estab­lish­ment and ope­ra­ti­on of AI regu­la­to­ry sandboxes.

The obli­ga­ti­on under the first sub­pa­ra­graph may also be ful­fil­led by par­ti­ci­pa­ting in an exi­sting sand­box in so far as that par­ti­ci­pa­ti­on pro­vi­des an equi­va­lent level of natio­nal covera­ge for the par­ti­ci­pa­ting Mem­ber States. 

(138) AI is a rapid­ly deve­lo­ping fami­ly of tech­no­lo­gies that requi­res regu­la­to­ry over­sight and a safe and con­trol­led space for expe­ri­men­ta­ti­on, while ensu­ring respon­si­ble inno­va­ti­on and inte­gra­ti­on of appro­pria­te safe­guards and risk miti­ga­ti­on mea­su­res. To ensu­re a legal frame­work that pro­mo­tes inno­va­ti­on, is future-pro­of and resi­li­ent to dis­rup­ti­on, Mem­ber Sta­tes should ensu­re that their natio­nal com­pe­tent aut­ho­ri­ties estab­lish at least one AI regu­la­to­ry sand­box at natio­nal level to faci­li­ta­te the deve­lo­p­ment and test­ing of inno­va­ti­ve AI systems under strict regu­la­to­ry over­sight befo­re the­se systems are pla­ced on the mar­ket or other­wi­se put into ser­vice. Mem­ber Sta­tes could also ful­fil this obli­ga­ti­on through par­ti­ci­pa­ting in alre­a­dy exi­sting regu­la­to­ry sand­bo­xes or estab­li­shing joint­ly a sand­box with one or more Mem­ber Sta­tes’ com­pe­tent aut­ho­ri­ties, inso­far as this par­ti­ci­pa­ti­on pro­vi­des equi­va­lent level of natio­nal covera­ge for the par­ti­ci­pa­ting Mem­ber Sta­tes. AI regu­la­to­ry sand­bo­xes could be estab­lished in phy­si­cal, digi­tal or hybrid form and may accom­mo­da­te phy­si­cal as well as digi­tal pro­ducts. Estab­li­shing aut­ho­ri­ties should also ensu­re that the AI regu­la­to­ry sand­bo­xes have the ade­qua­te resour­ces for their func­tio­ning, inclu­ding finan­cial and human resources. 

2. Addi­tio­nal AI regu­la­to­ry sand­bo­xes at regio­nal or local level, or estab­lished joint­ly with the com­pe­tent aut­ho­ri­ties of other Mem­ber Sta­tes may also be established.

3. The Euro­pean Data Pro­tec­tion Super­vi­sor may also estab­lish an AI regu­la­to­ry sand­box for Uni­on insti­tu­ti­ons, bodies, offices and agen­ci­es, and may exer­cise the roles and the tasks of natio­nal com­pe­tent aut­ho­ri­ties in accordance with this Chapter.

4. Mem­ber Sta­tes shall ensu­re that the com­pe­tent aut­ho­ri­ties refer­red to in para­graphs 1 and 2 allo­ca­te suf­fi­ci­ent resour­ces to com­ply with this Artic­le effec­tively and in a time­ly man­ner. Whe­re appro­pria­te, natio­nal com­pe­tent aut­ho­ri­ties shall coope­ra­te with other rele­vant aut­ho­ri­ties, and may allow for the invol­vement of other actors within the AI eco­sy­stem. This Artic­le shall not affect other regu­la­to­ry sand­bo­xes estab­lished under Uni­on or natio­nal law. Mem­ber Sta­tes shall ensu­re an appro­pria­te level of coope­ra­ti­on bet­ween the aut­ho­ri­ties super­vi­sing tho­se other sand­bo­xes and the natio­nal com­pe­tent authorities. 

5. AI regu­la­to­ry sand­bo­xes estab­lished under para­graph 1 shall pro­vi­de for a con­trol­led envi­ron­ment that fosters inno­va­ti­on and faci­li­ta­tes the deve­lo­p­ment, trai­ning, test­ing and vali­da­ti­on of inno­va­ti­ve AI systems for a limi­t­ed time befo­re their being pla­ced on the mar­ket or put into ser­vice pur­su­ant to a spe­ci­fic sand­box plan agreed bet­ween the pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders and the com­pe­tent aut­ho­ri­ty. Such sand­bo­xes may include test­ing in real world con­di­ti­ons super­vi­sed therein.

6. Com­pe­tent aut­ho­ri­ties shall pro­vi­de, as appro­pria­te, gui­dance, super­vi­si­on and sup­port within the AI regu­la­to­ry sand­box with a view to iden­ti­fy­ing risks, in par­ti­cu­lar to fun­da­men­tal rights, health and safe­ty, test­ing, miti­ga­ti­on mea­su­res, and their effec­ti­ve­ness in rela­ti­on to the obli­ga­ti­ons and requi­re­ments of this Regu­la­ti­on and, whe­re rele­vant, other Uni­on and natio­nal law super­vi­sed within the sandbox.

7. Com­pe­tent aut­ho­ri­ties shall pro­vi­de pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders par­ti­ci­pa­ting in the AI regu­la­to­ry sand­box with gui­dance on regu­la­to­ry expec­ta­ti­ons and how to ful­fil the requi­re­ments and obli­ga­ti­ons set out in this Regulation. 

Upon request of the pro­vi­der or pro­s­pec­ti­ve pro­vi­der of the AI system, the com­pe­tent aut­ho­ri­ty shall pro­vi­de a writ­ten pro­of of the acti­vi­ties suc­cessful­ly car­ri­ed out in the sand­box. The com­pe­tent aut­ho­ri­ty shall also pro­vi­de an exit report detail­ing the acti­vi­ties car­ri­ed out in the sand­box and the rela­ted results and lear­ning out­co­mes. Pro­vi­ders may use such docu­men­ta­ti­on to demon­stra­te their com­pli­ance with this Regu­la­ti­on through the con­for­mi­ty assess­ment pro­cess or rele­vant mar­ket sur­veil­lan­ce acti­vi­ties. In this regard, the exit reports and the writ­ten pro­of pro­vi­ded by the natio­nal com­pe­tent aut­ho­ri­ty shall be taken posi­tively into account by mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fi­ed bodies, with a view to acce­le­ra­ting con­for­mi­ty assess­ment pro­ce­du­res to a rea­sonable extent.

8. Sub­ject to the con­fi­den­tia­li­ty pro­vi­si­ons in Artic­le 78, and with the agree­ment of the pro­vi­der or pro­s­pec­ti­ve pro­vi­der, the Com­mis­si­on and the Board shall be aut­ho­ri­sed to access the exit reports and shall take them into account, as appro­pria­te, when exer­cis­ing their tasks under this Regu­la­ti­on. If both the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and the natio­nal com­pe­tent aut­ho­ri­ty expli­ci­t­ly agree, the exit report may be made publicly available through the sin­gle infor­ma­ti­on plat­form refer­red to in this Article.

9. The estab­lish­ment of AI regu­la­to­ry sand­bo­xes shall aim to con­tri­bu­te to the fol­lo­wing objectives:

(a) impro­ving legal cer­tain­ty to achie­ve regu­la­to­ry com­pli­ance with this Regu­la­ti­on or, whe­re rele­vant, other appli­ca­ble Uni­on and natio­nal law;

(b) sup­port­ing the sha­ring of best prac­ti­ces through coope­ra­ti­on with the aut­ho­ri­ties invol­ved in the AI regu­la­to­ry sandbox;

(c) foste­ring inno­va­ti­on and com­pe­ti­ti­ve­ness and faci­li­ta­ting the deve­lo­p­ment of an AI ecosystem;

(d) con­tri­bu­ting to evi­dence-based regu­la­to­ry learning;

(e) faci­li­ta­ting and acce­le­ra­ting access to the Uni­on mar­ket for AI systems, in par­ti­cu­lar when pro­vi­ded by SMEs, inclu­ding start-ups.

10. Natio­nal com­pe­tent aut­ho­ri­ties shall ensu­re that, to the ext­ent the inno­va­ti­ve AI systems invol­ve the pro­ce­s­sing of per­so­nal data or other­wi­se fall under the super­vi­so­ry remit of other natio­nal aut­ho­ri­ties or com­pe­tent aut­ho­ri­ties pro­vi­ding or sup­port­ing access to data, the natio­nal data pro­tec­tion aut­ho­ri­ties and tho­se other natio­nal or com­pe­tent aut­ho­ri­ties are asso­cia­ted with the ope­ra­ti­on of the AI regu­la­to­ry sand­box and invol­ved in the super­vi­si­on of tho­se aspects to the ext­ent of their respec­ti­ve tasks and powers. 

11. The AI regu­la­to­ry sand­bo­xes shall not affect the super­vi­so­ry or cor­rec­ti­ve powers of the com­pe­tent aut­ho­ri­ties super­vi­sing the sand­bo­xes, inclu­ding at regio­nal or local level. Any signi­fi­cant risks to health and safe­ty and fun­da­men­tal rights iden­ti­fi­ed during the deve­lo­p­ment and test­ing of such AI systems shall result in an ade­qua­te mitigation. 

Natio­nal com­pe­tent aut­ho­ri­ties shall have the power to tem­po­r­a­ri­ly or per­ma­nent­ly sus­pend the test­ing pro­cess, or the par­ti­ci­pa­ti­on in the sand­box if no effec­ti­ve miti­ga­ti­on is pos­si­ble, and shall inform the AI Office of such decis­i­on. Natio­nal com­pe­tent aut­ho­ri­ties shall exer­cise their super­vi­so­ry powers within the limits of the rele­vant law, using their dis­cretio­na­ry powers when imple­men­ting legal pro­vi­si­ons in respect of a spe­ci­fic AI regu­la­to­ry sand­box pro­ject, with the objec­ti­ve of sup­port­ing inno­va­ti­on in AI in the Union.

12. Pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders par­ti­ci­pa­ting in the AI regu­la­to­ry sand­box shall remain lia­ble under appli­ca­ble Uni­on and natio­nal lia­bi­li­ty law for any dama­ge inflic­ted on third par­ties as a result of the expe­ri­men­ta­ti­on taking place in the sand­box. Howe­ver, pro­vi­ded that the pro­s­pec­ti­ve pro­vi­ders obser­ve the spe­ci­fic plan and the terms and con­di­ti­ons for their par­ti­ci­pa­ti­on and fol­low in good faith the gui­dance given by the natio­nal com­pe­tent aut­ho­ri­ty, no admi­ni­stra­ti­ve fines shall be impo­sed by the aut­ho­ri­ties for inf­rin­ge­ments of this Regu­la­ti­on. Whe­re other com­pe­tent aut­ho­ri­ties respon­si­ble for other Uni­on and natio­nal law were actively invol­ved in the super­vi­si­on of the AI system in the sand­box and pro­vi­ded gui­dance for com­pli­ance, no admi­ni­stra­ti­ve fines shall be impo­sed regar­ding that law.

13. The AI regu­la­to­ry sand­bo­xes shall be desi­gned and imple­men­ted in such a way that, whe­re rele­vant, they faci­li­ta­te cross-bor­der coope­ra­ti­on bet­ween natio­nal com­pe­tent authorities.

14. Natio­nal com­pe­tent aut­ho­ri­ties shall coor­di­na­te their acti­vi­ties and coope­ra­te within the frame­work of the Board. 

15. Natio­nal com­pe­tent aut­ho­ri­ties shall inform the AI Office and the Board of the estab­lish­ment of a sand­box, and may ask them for sup­port and gui­dance. The AI Office shall make publicly available a list of plan­ned and exi­sting sand­bo­xes and keep it up to date in order to encou­ra­ge more inter­ac­tion in the AI regu­la­to­ry sand­bo­xes and cross-bor­der cooperation. 

16. Natio­nal com­pe­tent aut­ho­ri­ties shall sub­mit annu­al reports to the AI Office and to the Board, from one year after the estab­lish­ment of the AI regu­la­to­ry sand­box and every year the­re­af­ter until its ter­mi­na­ti­on, and a final report. Tho­se reports shall pro­vi­de infor­ma­ti­on on the pro­gress and results of the imple­men­ta­ti­on of tho­se sand­bo­xes, inclu­ding best prac­ti­ces, inci­dents, les­sons lear­nt and recom­men­da­ti­ons on their set­up and, whe­re rele­vant, on the appli­ca­ti­on and pos­si­ble revi­si­on of this Regu­la­ti­on, inclu­ding its dele­ga­ted and imple­men­ting acts, and on the appli­ca­ti­on of other Uni­on law super­vi­sed by the com­pe­tent aut­ho­ri­ties within the sand­box. The natio­nal com­pe­tent aut­ho­ri­ties shall make tho­se annu­al reports or abstracts the­reof available to the public, online. The Com­mis­si­on shall, whe­re appro­pria­te, take the annu­al reports into account when exer­cis­ing its tasks under this Regulation.

17. The Com­mis­si­on shall deve­lop a sin­gle and dedi­ca­ted inter­face con­tai­ning all rele­vant infor­ma­ti­on rela­ted to AI regu­la­to­ry sand­bo­xes to allow stake­hol­ders to inter­act with AI regu­la­to­ry sand­bo­xes and to rai­se enqui­ries with com­pe­tent aut­ho­ri­ties, and to seek non-bin­ding gui­dance on the con­for­mi­ty of inno­va­ti­ve pro­ducts, ser­vices, busi­ness models embed­ding AI tech­no­lo­gies, in accordance with Artic­le 62(1), point (c). The Com­mis­si­on shall proac­tively coor­di­na­te with natio­nal com­pe­tent aut­ho­ri­ties, whe­re relevant.

(139) The objec­ti­ves of the AI regu­la­to­ry sand­bo­xes should be to foster AI inno­va­ti­on by estab­li­shing a con­trol­led expe­ri­men­ta­ti­on and test­ing envi­ron­ment in the deve­lo­p­ment and pre-mar­ke­ting pha­se with a view to ensu­ring com­pli­ance of the inno­va­ti­ve AI systems with this Regu­la­ti­on and other rele­vant Uni­on and natio­nal law. Moreo­ver, the AI regu­la­to­ry sand­bo­xes should aim to enhan­ce legal cer­tain­ty for inno­va­tors and the com­pe­tent aut­ho­ri­ties’ over­sight and under­stan­ding of the oppor­tu­ni­ties, emer­ging risks and the impacts of AI use, to faci­li­ta­te regu­la­to­ry lear­ning for aut­ho­ri­ties and under­ta­kings, inclu­ding with a view to future adap­ti­ons of the legal frame­work, to sup­port coope­ra­ti­on and the sha­ring of best prac­ti­ces with the aut­ho­ri­ties invol­ved in the AI regu­la­to­ry sand­box, and to acce­le­ra­te access to mar­kets, inclu­ding by remo­ving bar­riers for SMEs, inclu­ding start-ups. AI regu­la­to­ry sand­bo­xes should be wide­ly available throug­hout the Uni­on, and par­ti­cu­lar atten­ti­on should be given to their acce­s­si­bi­li­ty for SMEs, inclu­ding start-ups. The par­ti­ci­pa­ti­on in the AI regu­la­to­ry sand­box should focus on issues that rai­se legal uncer­tain­ty for pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders to inno­va­te, expe­ri­ment with AI in the Uni­on and con­tri­bu­te to evi­dence-based regu­la­to­ry lear­ning. The super­vi­si­on of the AI systems in the AI regu­la­to­ry sand­box should the­r­e­fo­re cover their deve­lo­p­ment, trai­ning, test­ing and vali­da­ti­on befo­re the systems are pla­ced on the mar­ket or put into ser­vice, as well as the noti­on and occur­rence of sub­stan­ti­al modi­fi­ca­ti­on that may requi­re a new con­for­mi­ty assess­ment pro­ce­du­re. 
Any signi­fi­cant risks iden­ti­fi­ed during the deve­lo­p­ment and test­ing of such AI systems should result in ade­qua­te miti­ga­ti­on and, fai­ling that, in the sus­pen­si­on of the deve­lo­p­ment and test­ing process.

Whe­re appro­pria­te, natio­nal com­pe­tent aut­ho­ri­ties estab­li­shing AI regu­la­to­ry sand­bo­xes should coope­ra­te with other rele­vant aut­ho­ri­ties, inclu­ding tho­se super­vi­sing the pro­tec­tion of fun­da­men­tal rights, and could allow for the invol­vement of other actors within the AI eco­sy­stem such as natio­nal or Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons, noti­fi­ed bodies, test­ing and expe­ri­men­ta­ti­on faci­li­ties, rese­arch and expe­ri­men­ta­ti­on labs, Euro­pean Digi­tal Inno­va­ti­on Hubs and rele­vant stake­hol­der and civil socie­ty orga­ni­sa­ti­ons. To ensu­re uni­form imple­men­ta­ti­on across the Uni­on and eco­no­mies of sca­le, it is appro­pria­te to estab­lish com­mon rules for the AI regu­la­to­ry sand­bo­xes’ imple­men­ta­ti­on and a frame­work for coope­ra­ti­on bet­ween the rele­vant aut­ho­ri­ties invol­ved in the super­vi­si­on of the sand­bo­xes. AI regu­la­to­ry sand­bo­xes estab­lished under this Regu­la­ti­on should be wit­hout pre­ju­di­ce to other law allo­wing for the estab­lish­ment of other sand­bo­xes aiming to ensu­re com­pli­ance with law other than this Regu­la­ti­on. Whe­re appro­pria­te, rele­vant com­pe­tent aut­ho­ri­ties in char­ge of tho­se other regu­la­to­ry sand­bo­xes should con­sider the bene­fits of using tho­se sand­bo­xes also for the pur­po­se of ensu­ring com­pli­ance of AI systems with this Regu­la­ti­on. Upon agree­ment bet­ween the natio­nal com­pe­tent aut­ho­ri­ties and the par­ti­ci­pan­ts in the AI regu­la­to­ry sand­box, test­ing in real world con­di­ti­ons may also be ope­ra­ted and super­vi­sed in the frame­work of the AI regu­la­to­ry sandbox.

Artic­le 58 Detail­ed arran­ge­ments for, and func­tio­ning of, AI regu­la­to­ry sandboxes

1. In order to avo­id frag­men­ta­ti­on across the Uni­on, the Com­mis­si­on shall adopt imple­men­ting acts spe­ci­fy­ing the detail­ed arran­ge­ments for the estab­lish­ment, deve­lo­p­ment, imple­men­ta­ti­on, ope­ra­ti­on and super­vi­si­on of the AI regu­la­to­ry sand­bo­xes. The imple­men­ting acts shall include com­mon prin­ci­ples on the fol­lo­wing issues:

(a) eli­gi­bi­li­ty and sel­ec­tion cri­te­ria for par­ti­ci­pa­ti­on in the AI regu­la­to­ry sandbox;

(b) pro­ce­du­res for the appli­ca­ti­on, par­ti­ci­pa­ti­on, moni­to­ring, exi­ting from and ter­mi­na­ti­on of the AI regu­la­to­ry sand­box, inclu­ding the sand­box plan and the exit report;

(c) the terms and con­di­ti­ons appli­ca­ble to the participants.

Tho­se imple­men­ting acts shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

2. The imple­men­ting acts refer­red to in para­graph 1 shall ensure:

(a) that AI regu­la­to­ry sand­bo­xes are open to any app­ly­ing pro­vi­der or pro­s­pec­ti­ve pro­vi­der of an AI system who ful­fils eli­gi­bi­li­ty and sel­ec­tion cri­te­ria, which shall be trans­pa­rent and fair, and that natio­nal com­pe­tent aut­ho­ri­ties inform appli­cants of their decis­i­on within three months of the application;

(b) that AI regu­la­to­ry sand­bo­xes allow broad and equal access and keep up with demand for par­ti­ci­pa­ti­on; pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders may also sub­mit appli­ca­ti­ons in part­ner­ships with deployers and other rele­vant third parties;

(c) that the detail­ed arran­ge­ments for, and con­di­ti­ons con­cer­ning AI regu­la­to­ry sand­bo­xes sup­port, to the best ext­ent pos­si­ble, fle­xi­bi­li­ty for natio­nal com­pe­tent aut­ho­ri­ties to estab­lish and ope­ra­te their AI regu­la­to­ry sandboxes;

(d) that access to the AI regu­la­to­ry sand­bo­xes is free of char­ge for SMEs, inclu­ding start-ups, wit­hout pre­ju­di­ce to excep­tio­nal costs that natio­nal com­pe­tent aut­ho­ri­ties may reco­ver in a fair and pro­por­tio­na­te manner;

(e) that they faci­li­ta­te pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders, by means of the lear­ning out­co­mes of the AI regu­la­to­ry sand­bo­xes, in com­ply­ing with con­for­mi­ty assess­ment obli­ga­ti­ons under this Regu­la­ti­on and the vol­un­t­a­ry appli­ca­ti­on of the codes of con­duct refer­red to in Artic­le 95;

(f) that AI regu­la­to­ry sand­bo­xes faci­li­ta­te the invol­vement of other rele­vant actors within the AI eco­sy­stem, such as noti­fi­ed bodies and stan­dar­di­sati­on orga­ni­sa­ti­ons, SMEs, inclu­ding start-ups, enter­pri­ses, inno­va­tors, test­ing and expe­ri­men­ta­ti­on faci­li­ties, rese­arch and expe­ri­men­ta­ti­on labs and Euro­pean Digi­tal Inno­va­ti­on Hubs, cen­tres of excel­lence, indi­vi­du­al rese­ar­chers, in order to allow and faci­li­ta­te coope­ra­ti­on with the public and pri­va­te sectors;

(142) To ensu­re that AI leads to soci­al­ly and envi­ron­men­tal­ly bene­fi­ci­al out­co­mes, Mem­ber Sta­tes are encou­ra­ged to sup­port and pro­mo­te rese­arch and deve­lo­p­ment of AI solu­ti­ons in sup­port of soci­al­ly and envi­ron­men­tal­ly bene­fi­ci­al out­co­mes, such as AI-based solu­ti­ons to increa­se acce­s­si­bi­li­ty for per­sons with disa­bi­li­ties, tack­le socio-eco­no­mic ine­qua­li­ties, or meet envi­ron­men­tal tar­gets, by allo­ca­ting suf­fi­ci­ent resour­ces, inclu­ding public and Uni­on fun­ding, and, whe­re appro­pria­te and pro­vi­ded that the eli­gi­bi­li­ty and sel­ec­tion cri­te­ria are ful­fil­led, con­side­ring in par­ti­cu­lar pro­jects which pur­sue such objec­ti­ves. Such pro­jects should be based on the prin­ci­ple of inter­di­sci­pli­na­ry coope­ra­ti­on bet­ween AI deve­lo­pers, experts on ine­qua­li­ty and non-dis­cri­mi­na­ti­on, acce­s­si­bi­li­ty, con­su­mer, envi­ron­men­tal, and digi­tal rights, as well as academics. 

(g) that pro­ce­du­res, pro­ce­s­ses and admi­ni­stra­ti­ve requi­re­ments for appli­ca­ti­on, sel­ec­tion, par­ti­ci­pa­ti­on and exi­ting the AI regu­la­to­ry sand­box are simp­le, easi­ly intel­li­gi­ble, and cle­ar­ly com­mu­ni­ca­ted in order to faci­li­ta­te the par­ti­ci­pa­ti­on of SMEs, inclu­ding start-ups, with limi­t­ed legal and admi­ni­stra­ti­ve capa­ci­ties and are stream­lined across the Uni­on, in order to avo­id frag­men­ta­ti­on and that par­ti­ci­pa­ti­on in an AI regu­la­to­ry sand­box estab­lished by a Mem­ber Sta­te, or by the Euro­pean Data Pro­tec­tion Super­vi­sor is mutual­ly and uni­form­ly reco­g­nis­ed and car­ri­es the same legal effects across the Union;

(143) In order to pro­mo­te and pro­tect inno­va­ti­on, it is important that the inte­rests of SMEs, inclu­ding start-ups, that are pro­vi­ders or deployers of AI systems are taken into par­ti­cu­lar account. To that end, Mem­ber Sta­tes should deve­lop initia­ti­ves, which are tar­ge­ted at tho­se ope­ra­tors, inclu­ding on awa­re­ness rai­sing and infor­ma­ti­on com­mu­ni­ca­ti­on. Mem­ber Sta­tes should pro­vi­de SMEs, inclu­ding start-ups, that have a regi­stered office or a branch in the Uni­on, with prio­ri­ty access to the AI regu­la­to­ry sand­bo­xes pro­vi­ded that they ful­fil the eli­gi­bi­li­ty con­di­ti­ons and sel­ec­tion cri­te­ria and wit­hout pre­clu­ding other pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders to access the sand­bo­xes pro­vi­ded the same con­di­ti­ons and cri­te­ria are ful­fil­led. Mem­ber Sta­tes should uti­li­se exi­sting chan­nels and whe­re appro­pria­te, estab­lish new dedi­ca­ted chan­nels for com­mu­ni­ca­ti­on with SMEs, inclu­ding start-ups, deployers, other inno­va­tors and, as appro­pria­te, local public aut­ho­ri­ties, to sup­port SMEs throug­hout their deve­lo­p­ment path by pro­vi­ding gui­dance and respon­ding to queries about the imple­men­ta­ti­on of this Regu­la­ti­on. Whe­re appro­pria­te, the­se chan­nels should work tog­e­ther to crea­te syn­er­gies and ensu­re homo­gen­ei­ty in their gui­dance to SMEs, inclu­ding start-ups, and deployers. Addi­tio­nal­ly, Mem­ber Sta­tes should faci­li­ta­te the par­ti­ci­pa­ti­on of SMEs and other rele­vant stake­hol­ders in the stan­dar­di­sati­on deve­lo­p­ment pro­ce­s­ses. Moreo­ver, the spe­ci­fic inte­rests and needs of pro­vi­ders that are SMEs, inclu­ding start-ups, should be taken into account when noti­fi­ed bodies set con­for­mi­ty assess­ment fees. The Com­mis­si­on should regu­lar­ly assess the cer­ti­fi­ca­ti­on and com­pli­ance costs for SMEs, inclu­ding start-ups, through trans­pa­rent con­sul­ta­ti­ons and should work with Mem­ber Sta­tes to lower such costs. 

For exam­p­le, trans­la­ti­on costs rela­ted to man­da­to­ry docu­men­ta­ti­on and com­mu­ni­ca­ti­on with aut­ho­ri­ties may con­sti­tu­te a signi­fi­cant cost for pro­vi­ders and other ope­ra­tors, in par­ti­cu­lar tho­se of a smal­ler sca­le. Mem­ber Sta­tes should pos­si­bly ensu­re that one of the lan­guages deter­mi­ned and accept­ed by them for rele­vant pro­vi­ders’ docu­men­ta­ti­on and for com­mu­ni­ca­ti­on with ope­ra­tors is one which is broad­ly under­s­tood by the lar­gest pos­si­ble num­ber of cross-bor­der deployers. In order to address the spe­ci­fic needs of SMEs, inclu­ding start-ups, the Com­mis­si­on should pro­vi­de stan­dar­di­sed tem­pla­tes for the are­as cover­ed by this Regu­la­ti­on, upon request of the Board. Addi­tio­nal­ly, the Com­mis­si­on should com­ple­ment Mem­ber Sta­tes’ efforts by pro­vi­ding a sin­gle infor­ma­ti­on plat­form with easy-to-use infor­ma­ti­on with regards to this Regu­la­ti­on for all pro­vi­ders and deployers, by orga­ni­s­ing appro­pria­te com­mu­ni­ca­ti­on cam­paigns to rai­se awa­re­ness about the obli­ga­ti­ons ari­sing from this Regu­la­ti­on, and by eva­lua­ting and pro­mo­ting the con­ver­gence of best prac­ti­ces in public pro­cu­re­ment pro­ce­du­res in rela­ti­on to AI systems. Medi­um-sized enter­pri­ses which until recent­ly qua­li­fi­ed as small enter­pri­ses within the mea­ning of the Annex to Com­mis­si­on Recom­men­da­ti­on 2003/361/EC should have access to tho­se sup­port mea­su­res, as tho­se new medi­um-sized enter­pri­ses may some­ti­mes lack the legal resour­ces and trai­ning neces­sa­ry to ensu­re pro­per under­stan­ding of, and com­pli­ance with, this Regulation.

(h) that par­ti­ci­pa­ti­on in the AI regu­la­to­ry sand­box is limi­t­ed to a peri­od that is appro­pria­te to the com­ple­xi­ty and sca­le of the pro­ject and that may be exten­ded by the natio­nal com­pe­tent authority;

(i) that AI regu­la­to­ry sand­bo­xes faci­li­ta­te the deve­lo­p­ment of tools and infras­truc­tu­re for test­ing, bench­mar­king, asses­sing and explai­ning dimen­si­ons of AI systems rele­vant for regu­la­to­ry lear­ning, such as accu­ra­cy, robust­ness and cyber­se­cu­ri­ty, as well as mea­su­res to miti­ga­te risks to fun­da­men­tal rights and socie­ty at large. 

3. Pro­s­pec­ti­ve pro­vi­ders in the AI regu­la­to­ry sand­bo­xes, in par­ti­cu­lar SMEs and start-ups, shall be direc­ted, whe­re rele­vant, to pre-deployment ser­vices such as gui­dance on the imple­men­ta­ti­on of this Regu­la­ti­on, to other value-adding ser­vices such as help with stan­dar­di­sati­on docu­ments and cer­ti­fi­ca­ti­on, test­ing and expe­ri­men­ta­ti­on faci­li­ties, Euro­pean Digi­tal Inno­va­ti­on Hubs and cen­tres of excellence.

4. Whe­re natio­nal com­pe­tent aut­ho­ri­ties con­sider aut­ho­ri­sing test­ing in real world con­di­ti­ons super­vi­sed within the frame­work of an AI regu­la­to­ry sand­box to be estab­lished under this Artic­le, they shall spe­ci­fi­cal­ly agree the terms and con­di­ti­ons of such test­ing and, in par­ti­cu­lar, the appro­pria­te safe­guards with the par­ti­ci­pan­ts, with a view to pro­tec­ting fun­da­men­tal rights, health and safe­ty. Whe­re appro­pria­te, they shall coope­ra­te with other natio­nal com­pe­tent aut­ho­ri­ties with a view to ensu­ring con­si­stent prac­ti­ces across the Union. 

Article 59 Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox

1. In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:

(a) AI systems shall be deve­lo­ped for safe­guar­ding sub­stan­ti­al public inte­rest by a public aut­ho­ri­ty or ano­ther natu­ral or legal per­son and in one or more of the fol­lo­wing areas:

(i) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;

(ii) a high level of pro­tec­tion and impro­ve­ment of the qua­li­ty of the envi­ron­ment, pro­tec­tion of bio­di­ver­si­ty, pro­tec­tion against pol­lu­ti­on, green tran­si­ti­on mea­su­res, cli­ma­te chan­ge miti­ga­ti­on and adap­t­ati­on measures; 

(iii) ener­gy sustainability;

(iv) safe­ty and resi­li­ence of trans­port systems and mobi­li­ty, cri­ti­cal infras­truc­tu­re and networks;

(v) effi­ci­en­cy and qua­li­ty of public admi­ni­stra­ti­on and public services;

(b) the data pro­ce­s­sed are neces­sa­ry for com­ply­ing with one or more of the requi­re­ments refer­red to in Chap­ter III, Sec­tion 2 whe­re tho­se requi­re­ments can­not effec­tively be ful­fil­led by pro­ce­s­sing anony­mi­sed, syn­the­tic or other non-per­so­nal data;

(c) the­re are effec­ti­ve moni­to­ring mecha­nisms to iden­ti­fy if any high risks to the rights and free­doms of the data sub­jects, as refer­red to in Artic­le 35 of Regu­la­ti­on (EU) 2016/679 and in Artic­le 39 of Regu­la­ti­on (EU) 2018/1725, may ari­se during the sand­box expe­ri­men­ta­ti­on, as well as respon­se mecha­nisms to prompt­ly miti­ga­te tho­se risks and, whe­re neces­sa­ry, stop the processing;

(d) any per­so­nal data to be pro­ce­s­sed in the con­text of the sand­box are in a func­tion­al­ly sepa­ra­te, iso­la­ted and pro­tec­ted data pro­ce­s­sing envi­ron­ment under the con­trol of the pro­s­pec­ti­ve pro­vi­der and only aut­ho­ri­sed per­sons have access to tho­se data;

(e) pro­vi­ders can fur­ther share the ori­gi­nal­ly coll­ec­ted data only in accordance with Uni­on data pro­tec­tion law; any per­so­nal data crea­ted in the sand­box can­not be shared out­side the sandbox;

(f) any pro­ce­s­sing of per­so­nal data in the con­text of the sand­box neither leads to mea­su­res or decis­i­ons affec­ting the data sub­jects nor does it affect the appli­ca­ti­on of their rights laid down in Uni­on law on the pro­tec­tion of per­so­nal data;

(g) any per­so­nal data pro­ce­s­sed in the con­text of the sand­box are pro­tec­ted by means of appro­pria­te tech­ni­cal and orga­ni­sa­tio­nal mea­su­res and dele­ted once the par­ti­ci­pa­ti­on in the sand­box has ter­mi­na­ted or the per­so­nal data has rea­ched the end of its reten­ti­on period;

(h) the logs of the pro­ce­s­sing of per­so­nal data in the con­text of the sand­box are kept for the dura­ti­on of the par­ti­ci­pa­ti­on in the sand­box, unless pro­vi­ded other­wi­se by Uni­on or natio­nal law;

(i) a com­ple­te and detail­ed descrip­ti­on of the pro­cess and ratio­na­le behind the trai­ning, test­ing and vali­da­ti­on of the AI system is kept tog­e­ther with the test­ing results as part of the tech­ni­cal docu­men­ta­ti­on refer­red to in Annex IV;

(j) a short sum­ma­ry of the AI pro­ject deve­lo­ped in the sand­box, its objec­ti­ves and expec­ted results is published on the web­site of the com­pe­tent aut­ho­ri­ties; this obli­ga­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data in rela­ti­on to the acti­vi­ties of law enforce­ment, bor­der con­trol, immi­gra­ti­on or asyl­um authorities.

(140) This Regu­la­ti­on should pro­vi­de the legal basis for the pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders in the AI regu­la­to­ry sand­box to use per­so­nal data coll­ec­ted for other pur­po­ses for deve­lo­ping cer­tain AI systems in the public inte­rest within the AI regu­la­to­ry sand­box, only under spe­ci­fi­ed con­di­ti­ons, in accordance with Artic­le 6(4) and Artic­le 9(2), point (g), of Regu­la­ti­on (EU) 2016/679, and Artic­les 5, 6 and 10 of Regu­la­ti­on (EU) 2018/1725, and wit­hout pre­ju­di­ce to Artic­le 4(2) and Artic­le 10 of Direc­ti­ve (EU) 2016/680. All other obli­ga­ti­ons of data con­trol­lers and rights of data sub­jects under Regu­la­ti­ons (EU) 2016/679 and (EU) 2018/1725 and Direc­ti­ve (EU) 2016/680 remain appli­ca­ble. In par­ti­cu­lar, this Regu­la­ti­on should not pro­vi­de a legal basis in the mea­ning of Artic­le 22(2), point (b) of Regu­la­ti­on (EU) 2016/679 and Artic­le 24(2), point (b) of Regu­la­ti­on (EU) 2018/1725. Pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders in the AI regu­la­to­ry sand­box should ensu­re appro­pria­te safe­guards and coope­ra­te with the com­pe­tent aut­ho­ri­ties, inclu­ding by fol­lo­wing their gui­dance and acting expe­di­tious­ly and in good faith to ade­qua­te­ly miti­ga­te any iden­ti­fi­ed signi­fi­cant risks to safe­ty, health, and fun­da­men­tal rights that may ari­se during the deve­lo­p­ment, test­ing and expe­ri­men­ta­ti­on in that sandbox.

2. For the pur­po­ses of the pre­ven­ti­on, inve­sti­ga­ti­on, detec­tion or pro­se­cu­ti­on of cri­mi­nal offen­ces or the exe­cu­ti­on of cri­mi­nal pen­al­ties, inclu­ding safe­guar­ding against and pre­ven­ting thre­ats to public secu­ri­ty, under the con­trol and respon­si­bi­li­ty of law enforce­ment aut­ho­ri­ties, the pro­ce­s­sing of per­so­nal data in AI regu­la­to­ry sand­bo­xes shall be based on a spe­ci­fic Uni­on or natio­nal law and sub­ject to the same cumu­la­ti­ve con­di­ti­ons as refer­red to in para­graph 1.

3. Para­graph 1 is wit­hout pre­ju­di­ce to Uni­on or natio­nal law which exclu­des pro­ce­s­sing of per­so­nal data for other pur­po­ses than tho­se expli­ci­t­ly men­tio­ned in that law, as well as to Uni­on or natio­nal law lay­ing down the basis for the pro­ce­s­sing of per­so­nal data which is neces­sa­ry for the pur­po­se of deve­lo­ping, test­ing or trai­ning of inno­va­ti­ve AI systems or any other legal basis, in com­pli­ance with Uni­on law on the pro­tec­tion of per­so­nal data. 

Artic­le 60 Test­ing of high-risk AI systems in real world con­di­ti­ons out­side AI regu­la­to­ry sandboxes

1. Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5.

The Com­mis­si­on shall, by means of imple­men­ting acts, spe­ci­fy the detail­ed ele­ments of the real-world test­ing plan. Tho­se imple­men­ting acts shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

This para­graph shall be wit­hout pre­ju­di­ce to Uni­on or natio­nal law on the test­ing in real world con­di­ti­ons of high-risk AI systems rela­ted to pro­ducts cover­ed by Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex I.

2. Pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders may con­duct test­ing of high-risk AI systems refer­red to in Annex III in real world con­di­ti­ons at any time befo­re the pla­cing on the mar­ket or the put­ting into ser­vice of the AI system on their own or in part­ner­ship with one or more deployers or pro­s­pec­ti­ve deployers.

3. The test­ing of high-risk AI systems in real world con­di­ti­ons under this Artic­le shall be wit­hout pre­ju­di­ce to any ethi­cal review that is requi­red by Uni­on or natio­nal law.

4. Pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders may con­duct the test­ing in real world con­di­ti­ons only whe­re all of the fol­lo­wing con­di­ti­ons are met:

(a) the pro­vi­der or pro­s­pec­ti­ve pro­vi­der has drawn up a real-world test­ing plan and sub­mit­ted it to the mar­ket sur­veil­lan­ce aut­ho­ri­ty in the Mem­ber Sta­te whe­re the test­ing in real world con­di­ti­ons is to be conducted;

(b) the mar­ket sur­veil­lan­ce aut­ho­ri­ty in the Mem­ber Sta­te whe­re the test­ing in real world con­di­ti­ons is to be con­duc­ted has appro­ved the test­ing in real world con­di­ti­ons and the real-world test­ing plan; whe­re the mar­ket sur­veil­lan­ce aut­ho­ri­ty has not pro­vi­ded an ans­wer within 30 days, the test­ing in real world con­di­ti­ons and the real-world test­ing plan shall be under­s­tood to have been appro­ved; whe­re natio­nal law does not pro­vi­de for a tacit appr­oval, the test­ing in real world con­di­ti­ons shall remain sub­ject to an authorisation; 

(c) the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, and high-risk AI systems referred to in point 2 of Annex III has registered the testing in real world conditions in accordance with Article 71(4) with a Union-wide unique single identification number and with the information specified in Annex IX; the provider or prospective provider of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, has registered the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point (d), with a Union-wide unique single identification number and with the information specified therein; the provider or prospective provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world conditions in accordance with Article 49(5);

(d) the pro­vi­der or pro­s­pec­ti­ve pro­vi­der con­duc­ting the test­ing in real world con­di­ti­ons is estab­lished in the Uni­on or has appoin­ted a legal repre­sen­ta­ti­ve who is estab­lished in the Union;

(e) data coll­ec­ted and pro­ce­s­sed for the pur­po­se of the test­ing in real world con­di­ti­ons shall be trans­fer­red to third count­ries only pro­vi­ded that appro­pria­te and appli­ca­ble safe­guards under Uni­on law are implemented;

(f) the test­ing in real world con­di­ti­ons does not last lon­ger than neces­sa­ry to achie­ve its objec­ti­ves and in any case not lon­ger than six months, which may be exten­ded for an addi­tio­nal peri­od of six months, sub­ject to pri­or noti­fi­ca­ti­on by the pro­vi­der or pro­s­pec­ti­ve pro­vi­der to the mar­ket sur­veil­lan­ce aut­ho­ri­ty, accom­pa­nied by an expl­ana­ti­on of the need for such an extension;

(g) the sub­jects of the test­ing in real world con­di­ti­ons who are per­sons belon­ging to vul­nerable groups due to their age or disa­bi­li­ty, are appro­pria­te­ly protected;

(h) whe­re a pro­vi­der or pro­s­pec­ti­ve pro­vi­der orga­ni­s­es the test­ing in real world con­di­ti­ons in coope­ra­ti­on with one or more deployers or pro­s­pec­ti­ve deployers, the lat­ter have been infor­med of all aspects of the test­ing that are rele­vant to their decis­i­on to par­ti­ci­pa­te, and given the rele­vant ins­truc­tions for use of the AI system refer­red to in Artic­le 13; the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and the deployer or pro­s­pec­ti­ve deployer shall con­clude an agree­ment spe­ci­fy­ing their roles and respon­si­bi­li­ties with a view to ensu­ring com­pli­ance with the pro­vi­si­ons for test­ing in real world con­di­ti­ons under this Regu­la­ti­on and under other appli­ca­ble Uni­on and natio­nal law;

(i) the sub­jects of the test­ing in real world con­di­ti­ons have given infor­med con­sent in accordance with Artic­le 61, or in the case of law enforce­ment, whe­re the see­king of infor­med con­sent would pre­vent the AI system from being tested, the test­ing its­elf and the out­co­me of the test­ing in the real world con­di­ti­ons shall not have any nega­ti­ve effect on the sub­jects, and their per­so­nal data shall be dele­ted after the test is performed;

(j) the test­ing in real world con­di­ti­ons is effec­tively over­seen by the pro­vi­der or pro­s­pec­ti­ve pro­vi­der, as well as by deployers or pro­s­pec­ti­ve deployers through per­sons who are sui­ta­b­ly qua­li­fi­ed in the rele­vant field and have the neces­sa­ry capa­ci­ty, trai­ning and aut­ho­ri­ty to per­form their tasks;

(k) the pre­dic­tions, recom­men­da­ti­ons or decis­i­ons of the AI system can be effec­tively rever­sed and disregarded.

5. Any sub­jects of the test­ing in real world con­di­ti­ons, or their legal­ly desi­gna­ted repre­sen­ta­ti­ve, as appro­pria­te, may, wit­hout any resul­ting detri­ment and wit­hout having to pro­vi­de any justi­fi­ca­ti­on, with­draw from the test­ing at any time by revo­king their infor­med con­sent and may request the imme­dia­te and per­ma­nent dele­ti­on of their per­so­nal data. The with­dra­wal of the infor­med con­sent shall not affect the acti­vi­ties alre­a­dy car­ri­ed out.

6. In accordance with Artic­le 75, Mem­ber Sta­tes shall con­fer on their mar­ket sur­veil­lan­ce aut­ho­ri­ties the powers of requi­ring pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders to pro­vi­de infor­ma­ti­on, of car­ry­ing out unan­noun­ced remo­te or on-site inspec­tions, and of per­forming checks on the con­duct of the test­ing in real world con­di­ti­ons and the rela­ted high-risk AI systems. Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall use tho­se powers to ensu­re the safe deve­lo­p­ment of test­ing in real world conditions.

7. Any serious inci­dent iden­ti­fi­ed in the cour­se of the test­ing in real world con­di­ti­ons shall be repor­ted to the natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ty in accordance with Artic­le 73. The pro­vi­der or pro­s­pec­ti­ve pro­vi­der shall adopt imme­dia­te miti­ga­ti­on mea­su­res or, fai­ling that, shall sus­pend the test­ing in real world con­di­ti­ons until such miti­ga­ti­on takes place, or other­wi­se ter­mi­na­te it. The pro­vi­der or pro­s­pec­ti­ve pro­vi­der shall estab­lish a pro­ce­du­re for the prompt recall of the AI system upon such ter­mi­na­ti­on of the test­ing in real world conditions.

8. Pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders shall noti­fy the natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ty in the Mem­ber Sta­te whe­re the test­ing in real world con­di­ti­ons is to be con­duc­ted of the sus­pen­si­on or ter­mi­na­ti­on of the test­ing in real world con­di­ti­ons and of the final outcomes.

9. The pro­vi­der or pro­s­pec­ti­ve pro­vi­der shall be lia­ble under appli­ca­ble Uni­on and natio­nal lia­bi­li­ty law for any dama­ge cau­sed in the cour­se of their test­ing in real world conditions. 

(141) In order to acce­le­ra­te the pro­cess of deve­lo­p­ment and the pla­cing on the mar­ket of the high-risk AI systems listed in an annex to this Regu­la­ti­on, it is important that pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders of such systems may also bene­fit from a spe­ci­fic regime for test­ing tho­se systems in real world con­di­ti­ons, wit­hout par­ti­ci­pa­ting in an AI regu­la­to­ry sand­box. Howe­ver, in such cases, taking into account the pos­si­ble con­se­quen­ces of such test­ing on indi­vi­du­als, it should be ensu­red that appro­pria­te and suf­fi­ci­ent gua­ran­tees and con­di­ti­ons are intro­du­ced by this Regu­la­ti­on for pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders. Such gua­ran­tees should include, inter alia, reque­st­ing infor­med con­sent of natu­ral per­sons to par­ti­ci­pa­te in test­ing in real world con­di­ti­ons, with the excep­ti­on of law enforce­ment whe­re the see­king of infor­med con­sent would pre­vent the AI system from being tested. Con­sent of sub­jects to par­ti­ci­pa­te in such test­ing under this Regu­la­ti­on is distinct from, and wit­hout pre­ju­di­ce to, con­sent of data sub­jects for the pro­ce­s­sing of their per­so­nal data under the rele­vant data pro­tec­tion law. 

It is also important to mini­mi­se the risks and enable over­sight by com­pe­tent aut­ho­ri­ties and the­r­e­fo­re requi­re pro­s­pec­ti­ve pro­vi­ders to have a real-world test­ing plan sub­mit­ted to com­pe­tent mar­ket sur­veil­lan­ce aut­ho­ri­ty, regi­ster the test­ing in dedi­ca­ted sec­tions in the EU data­ba­se sub­ject to some limi­t­ed excep­ti­ons, set limi­ta­ti­ons on the peri­od for which the test­ing can be done and requi­re addi­tio­nal safe­guards for per­sons belon­ging to cer­tain vul­nerable groups, as well as a writ­ten agree­ment defi­ning the roles and respon­si­bi­li­ties of pro­s­pec­ti­ve pro­vi­ders and deployers and effec­ti­ve over­sight by com­pe­tent per­son­nel invol­ved in the real world test­ing. Fur­ther­mo­re, it is appro­pria­te to envi­sa­ge addi­tio­nal safe­guards to ensu­re that the pre­dic­tions, recom­men­da­ti­ons or decis­i­ons of the AI system can be effec­tively rever­sed and dis­re­gard­ed and that per­so­nal data is pro­tec­ted and is dele­ted when the sub­jects have with­drawn their con­sent to par­ti­ci­pa­te in the test­ing wit­hout pre­ju­di­ce to their rights as data sub­jects under the Uni­on data pro­tec­tion law. As regards trans­fer of data, it is also appro­pria­te to envi­sa­ge that data coll­ec­ted and pro­ce­s­sed for the pur­po­se of test­ing in real-world con­di­ti­ons should be trans­fer­red to third count­ries only whe­re appro­pria­te and appli­ca­ble safe­guards under Uni­on law are imple­men­ted, in par­ti­cu­lar in accordance with bases for trans­fer of per­so­nal data under Uni­on law on data pro­tec­tion, while for non-per­so­nal data appro­pria­te safe­guards are put in place in accordance with Uni­on law, such as Regu­la­ti­ons (EU) 2022/868 and (EU) 2023/2854 of the Euro­pean Par­lia­ment and of the Council.

Artic­le 61 Infor­med con­sent to par­ti­ci­pa­te in test­ing in real world con­di­ti­ons out­side AI regu­la­to­ry sandboxes

1. For the pur­po­se of test­ing in real world con­di­ti­ons under Artic­le 60, free­ly-given infor­med con­sent shall be obtai­ned from the sub­jects of test­ing pri­or to their par­ti­ci­pa­ti­on in such test­ing and after their having been duly infor­med with con­cise, clear, rele­vant, and under­stan­da­ble infor­ma­ti­on regarding:

(a) the natu­re and objec­ti­ves of the test­ing in real world con­di­ti­ons and the pos­si­ble incon­ve­ni­ence that may be lin­ked to their participation;

(b) the con­di­ti­ons under which the test­ing in real world con­di­ti­ons is to be con­duc­ted, inclu­ding the expec­ted dura­ti­on of the sub­ject or sub­jects’ participation;

(c) their rights, and the gua­ran­tees regar­ding their par­ti­ci­pa­ti­on, in par­ti­cu­lar their right to refu­se to par­ti­ci­pa­te in, and the right to with­draw from, test­ing in real world con­di­ti­ons at any time wit­hout any resul­ting detri­ment and wit­hout having to pro­vi­de any justification;

(d) the arran­ge­ments for reque­st­ing the rever­sal or the dis­re­gar­ding of the pre­dic­tions, recom­men­da­ti­ons or decis­i­ons of the AI system;

(e) the Uni­on-wide uni­que sin­gle iden­ti­fi­ca­ti­on num­ber of the test­ing in real world con­di­ti­ons in accordance with Artic­le 60(4) point (c), and the cont­act details of the pro­vi­der or its legal repre­sen­ta­ti­ve from whom fur­ther infor­ma­ti­on can be obtained.

2. The infor­med con­sent shall be dated and docu­men­ted and a copy shall be given to the sub­jects of test­ing or their legal representative.

Artic­le 62 Mea­su­res for pro­vi­ders and deployers, in par­ti­cu­lar SMEs, inclu­ding start-ups

1. Mem­ber Sta­tes shall under­ta­ke the fol­lo­wing actions:

(a) pro­vi­de SMEs, inclu­ding start-ups, having a regi­stered office or a branch in the Uni­on, with prio­ri­ty access to the AI regu­la­to­ry sand­bo­xes, to the ext­ent that they ful­fil the eli­gi­bi­li­ty con­di­ti­ons and sel­ec­tion cri­te­ria; the prio­ri­ty access shall not pre­clude other SMEs, inclu­ding start-ups, other than tho­se refer­red to in this para­graph from access to the AI regu­la­to­ry sand­box, pro­vi­ded that they also ful­fil the eli­gi­bi­li­ty con­di­ti­ons and sel­ec­tion criteria;

(b) orga­ni­se spe­ci­fic awa­re­ness rai­sing and trai­ning acti­vi­ties on the appli­ca­ti­on of this Regu­la­ti­on tail­o­red to the needs of SMEs inclu­ding start-ups, deployers and, as appro­pria­te, local public authorities;

(c) uti­li­se exi­sting dedi­ca­ted chan­nels and whe­re appro­pria­te, estab­lish new ones for com­mu­ni­ca­ti­on with SMEs inclu­ding start-ups, deployers, other inno­va­tors and, as appro­pria­te, local public aut­ho­ri­ties to pro­vi­de advice and respond to queries about the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding as regards par­ti­ci­pa­ti­on in AI regu­la­to­ry sandboxes;

(d) faci­li­ta­te the par­ti­ci­pa­ti­on of SMEs and other rele­vant stake­hol­ders in the stan­dar­di­sati­on deve­lo­p­ment process.

2. The spe­ci­fic inte­rests and needs of the SME pro­vi­ders, inclu­ding start-ups, shall be taken into account when set­ting the fees for con­for­mi­ty assess­ment under Artic­le 43, redu­cing tho­se fees pro­por­tio­na­te­ly to their size, mar­ket size and other rele­vant indicators.

3. The AI Office shall under­ta­ke the fol­lo­wing actions:

(a) pro­vi­de stan­dar­di­sed tem­pla­tes for are­as cover­ed by this Regu­la­ti­on, as spe­ci­fi­ed by the Board in its request;

(b) deve­lop and main­tain a sin­gle infor­ma­ti­on plat­form pro­vi­ding easy to use infor­ma­ti­on in rela­ti­on to this Regu­la­ti­on for all ope­ra­tors across the Union;

(c) orga­ni­se appro­pria­te com­mu­ni­ca­ti­on cam­paigns to rai­se awa­re­ness about the obli­ga­ti­ons ari­sing from this Regulation;

(d) eva­lua­te and pro­mo­te the con­ver­gence of best prac­ti­ces in public pro­cu­re­ment pro­ce­du­res in rela­ti­on to AI systems.

Artic­le 63 Dero­ga­ti­ons for spe­ci­fic operators

1. Microen­ter­pri­ses within the mea­ning of Recom­men­da­ti­on 2003/361/EC may com­ply with cer­tain ele­ments of the qua­li­ty manage­ment system requi­red by Artic­le 17 of this Regu­la­ti­on in a sim­pli­fi­ed man­ner, pro­vi­ded that they do not have part­ner enter­pri­ses or lin­ked enter­pri­ses within the mea­ning of that Recom­men­da­ti­on. For that pur­po­se, the Com­mis­si­on shall deve­lop gui­de­lines on the ele­ments of the qua­li­ty manage­ment system which may be com­plied with in a sim­pli­fi­ed man­ner con­side­ring the needs of microen­ter­pri­ses, wit­hout affec­ting the level of pro­tec­tion or the need for com­pli­ance with the requi­re­ments in respect of high-risk AI systems.

(146) Moreo­ver, in light of the very small size of some ope­ra­tors and in order to ensu­re pro­por­tio­na­li­ty regar­ding costs of inno­va­ti­on, it is appro­pria­te to allow microen­ter­pri­ses to ful­fil one of the most cost­ly obli­ga­ti­ons, name­ly to estab­lish a qua­li­ty manage­ment system, in a sim­pli­fi­ed man­ner which would redu­ce the admi­ni­stra­ti­ve bur­den and the costs for tho­se enter­pri­ses wit­hout affec­ting the level of pro­tec­tion and the need for com­pli­ance with the requi­re­ments for high-risk AI systems. The Com­mis­si­on should deve­lop gui­de­lines to spe­ci­fy the ele­ments of the qua­li­ty manage­ment system to be ful­fil­led in this sim­pli­fi­ed man­ner by microenterprises.

2. Para­graph 1 of this Artic­le shall not be inter­pre­ted as exemp­ting tho­se ope­ra­tors from ful­fil­ling any other requi­re­ments or obli­ga­ti­ons laid down in this Regu­la­ti­on, inclu­ding tho­se estab­lished in Artic­les 9, 10, 11, 12, 13, 14, 15, 72 and 73.

Chap­ter VII Governance

Sec­tion 1 Gover­nan­ce At Uni­on Level

Artic­le 64 AI Office

1. The Com­mis­si­on shall deve­lop Uni­on exper­ti­se and capa­bi­li­ties in the field of AI through the AI Office.

2. Mem­ber Sta­tes shall faci­li­ta­te the tasks ent­ru­sted to the AI Office, as reflec­ted in this Regulation.

(148) This Regu­la­ti­on should estab­lish a gover­nan­ce frame­work that both allo­ws to coor­di­na­te and sup­port the appli­ca­ti­on of this Regu­la­ti­on at natio­nal level, as well as build capa­bi­li­ties at Uni­on level and inte­gra­te stake­hol­ders in the field of AI. The effec­ti­ve imple­men­ta­ti­on and enforce­ment of this Regu­la­ti­on requi­re a gover­nan­ce frame­work that allo­ws to coor­di­na­te and build up cen­tral exper­ti­se at Uni­on level. The AI Office was estab­lished by Com­mis­si­on Decis­i­on and has as its mis­si­on to deve­lop Uni­on exper­ti­se and capa­bi­li­ties in the field of AI and to con­tri­bu­te to the imple­men­ta­ti­on of Uni­on law on AI. Mem­ber Sta­tes should faci­li­ta­te the tasks of the AI Office with a view to sup­port the deve­lo­p­ment of Uni­on exper­ti­se and capa­bi­li­ties at Uni­on level and to streng­then the func­tio­ning of the digi­tal sin­gle mar­ket. Fur­ther­mo­re, a Board com­po­sed of repre­sen­ta­ti­ves of the Mem­ber Sta­tes, a sci­en­ti­fic panel to inte­gra­te the sci­en­ti­fic com­mu­ni­ty and an advi­so­ry forum to con­tri­bu­te stake­hol­der input to the imple­men­ta­ti­on of this Regu­la­ti­on, at Uni­on and natio­nal level, should be estab­lished. The deve­lo­p­ment of Uni­on exper­ti­se and capa­bi­li­ties should also include making use of exi­sting resour­ces and exper­ti­se, in par­ti­cu­lar through syn­er­gies with struc­tures built up in the con­text of the Uni­on level enforce­ment of other law and syn­er­gies with rela­ted initia­ti­ves at Uni­on level, such as the EuroHPC Joint Under­ta­king and the AI test­ing and expe­ri­men­ta­ti­on faci­li­ties under the Digi­tal Euro­pe Programme.

Artic­le 65 Estab­lish­ment and struc­tu­re of the Euro­pean Arti­fi­ci­al Intel­li­gence Board

1. A Euro­pean Arti­fi­ci­al Intel­li­gence Board (the ‘Board’) is her­eby established.

2. The Board shall be com­po­sed of one repre­sen­ta­ti­ve per Mem­ber Sta­te. The Euro­pean Data Pro­tec­tion Super­vi­sor shall par­ti­ci­pa­te as obser­ver. The AI Office shall also attend the Board’s mee­tings, wit­hout taking part in the votes. Other natio­nal and Uni­on aut­ho­ri­ties, bodies or experts may be invi­ted to the mee­tings by the Board on a case by case basis, whe­re the issues dis­cus­sed are of rele­van­ce for them.

3. Each repre­sen­ta­ti­ve shall be desi­gna­ted by their Mem­ber Sta­te for a peri­od of three years, rene­wa­ble once.

4. Mem­ber Sta­tes shall ensu­re that their repre­sen­ta­ti­ves on the Board:

(a) have the rele­vant com­pe­ten­ces and powers in their Mem­ber Sta­te so as to con­tri­bu­te actively to the achie­ve­ment of the Board’s tasks refer­red to in Artic­le 66;

(b) are desi­gna­ted as a sin­gle cont­act point vis-à-vis the Board and, whe­re appro­pria­te, taking into account Mem­ber Sta­tes’ needs, as a sin­gle cont­act point for stakeholders;

(c) are empowered to faci­li­ta­te con­si­sten­cy and coor­di­na­ti­on bet­ween natio­nal com­pe­tent aut­ho­ri­ties in their Mem­ber Sta­te as regards the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding through the coll­ec­tion of rele­vant data and infor­ma­ti­on for the pur­po­se of ful­fil­ling their tasks on the Board.

5. The desi­gna­ted repre­sen­ta­ti­ves of the Mem­ber Sta­tes shall adopt the Board’s rules of pro­ce­du­re by a two-thirds majo­ri­ty. The rules of pro­ce­du­re shall, in par­ti­cu­lar, lay down pro­ce­du­res for the sel­ec­tion pro­cess, the dura­ti­on of the man­da­te of, and spe­ci­fi­ca­ti­ons of the tasks of, the Chair, detail­ed arran­ge­ments for voting, and the orga­ni­sa­ti­on of the Board’s acti­vi­ties and tho­se of its sub-groups.

6. The Board shall estab­lish two stan­ding sub-groups to pro­vi­de a plat­form for coope­ra­ti­on and exch­an­ge among mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fy­ing aut­ho­ri­ties about issues rela­ted to mar­ket sur­veil­lan­ce and noti­fi­ed bodies respectively.

The stan­ding sub-group for mar­ket sur­veil­lan­ce should act as the admi­ni­stra­ti­ve coope­ra­ti­on group (ADCO) for this Regu­la­ti­on within the mea­ning of Artic­le 30 of Regu­la­ti­on (EU) 2019/1020.

The Board may estab­lish other stan­ding or tem­po­ra­ry sub-groups as appro­pria­te for the pur­po­se of exami­ning spe­ci­fic issues. Whe­re appro­pria­te, repre­sen­ta­ti­ves of the advi­so­ry forum refer­red to in Artic­le 67 may be invi­ted to such sub-groups or to spe­ci­fic mee­tings of tho­se sub­groups as observers.

7. The Board shall be orga­ni­s­ed and ope­ra­ted so as to safe­guard the objec­ti­vi­ty and impar­tia­li­ty of its activities.

8. The Board shall be chai­red by one of the repre­sen­ta­ti­ves of the Mem­ber Sta­tes. The AI Office shall pro­vi­de the secre­ta­ri­at for the Board, con­ve­ne the mee­tings upon request of the Chair, and prepa­re the agen­da in accordance with the tasks of the Board pur­su­ant to this Regu­la­ti­on and its rules of procedure.

(149) In order to faci­li­ta­te a smooth, effec­ti­ve and har­mo­ni­s­ed imple­men­ta­ti­on of this Regu­la­ti­on a Board should be estab­lished. The Board should reflect the various inte­rests of the AI eco-system and be com­po­sed of repre­sen­ta­ti­ves of the Mem­ber Sta­tes. The Board should be respon­si­ble for a num­ber of advi­so­ry tasks, inclu­ding issuing opi­ni­ons, recom­men­da­ti­ons, advice or con­tri­bu­ting to gui­dance on mat­ters rela­ted to the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding on enforce­ment mat­ters, tech­ni­cal spe­ci­fi­ca­ti­ons or exi­sting stan­dards regar­ding the requi­re­ments estab­lished in this Regu­la­ti­on and pro­vi­ding advice to the Com­mis­si­on and the Mem­ber Sta­tes and their natio­nal com­pe­tent aut­ho­ri­ties on spe­ci­fic que­sti­ons rela­ted to AI. In order to give some fle­xi­bi­li­ty to Mem­ber Sta­tes in the desi­gna­ti­on of their repre­sen­ta­ti­ves in the Board, such repre­sen­ta­ti­ves may be any per­sons belon­ging to public enti­ties who should have the rele­vant com­pe­ten­ces and powers to faci­li­ta­te coor­di­na­ti­on at natio­nal level and con­tri­bu­te to the achie­ve­ment of the Board’s tasks. The Board should estab­lish two stan­ding sub-groups to pro­vi­de a plat­form for coope­ra­ti­on and exch­an­ge among mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fy­ing aut­ho­ri­ties on issues rela­ted, respec­tively, to mar­ket sur­veil­lan­ce and noti­fi­ed bodies. The stan­ding sub­group for mar­ket sur­veil­lan­ce should act as the admi­ni­stra­ti­ve coope­ra­ti­on group (ADCO) for this Regu­la­ti­on within the mea­ning of Artic­le 30 of Regu­la­ti­on (EU) 2019/1020. 
In accordance with Artic­le 33 of that Regu­la­ti­on, the Com­mis­si­on should sup­port the acti­vi­ties of the stan­ding sub­group for mar­ket sur­veil­lan­ce by under­ta­king mar­ket eva­lua­tions or stu­dies, in par­ti­cu­lar with a view to iden­ti­fy­ing aspects of this Regu­la­ti­on requi­ring spe­ci­fic and urgent coor­di­na­ti­on among mar­ket sur­veil­lan­ce aut­ho­ri­ties. The Board may estab­lish other stan­ding or tem­po­ra­ry sub-groups as appro­pria­te for the pur­po­se of exami­ning spe­ci­fic issues. The Board should also coope­ra­te, as appro­pria­te, with rele­vant Uni­on bodies, experts groups and net­works acti­ve in the con­text of rele­vant Uni­on law, inclu­ding in par­ti­cu­lar tho­se acti­ve under rele­vant Uni­on law on data, digi­tal pro­ducts and services.

Artic­le 66 Tasks of the Board

The Board shall advi­se and assist the Com­mis­si­on and the Mem­ber Sta­tes in order to faci­li­ta­te the con­si­stent and effec­ti­ve appli­ca­ti­on of this Regu­la­ti­on. To that end, the Board may in particular:

(a) con­tri­bu­te to the coor­di­na­ti­on among natio­nal com­pe­tent aut­ho­ri­ties respon­si­ble for the appli­ca­ti­on of this Regu­la­ti­on and, in coope­ra­ti­on with and sub­ject to the agree­ment of the mar­ket sur­veil­lan­ce aut­ho­ri­ties con­cer­ned, sup­port joint acti­vi­ties of mar­ket sur­veil­lan­ce aut­ho­ri­ties refer­red to in Artic­le 74(11);

(b) coll­ect and share tech­ni­cal and regu­la­to­ry exper­ti­se and best prac­ti­ces among Mem­ber States;

(c) pro­vi­de advice on the imple­men­ta­ti­on of this Regu­la­ti­on, in par­ti­cu­lar as regards the enforce­ment of rules on gene­ral-pur­po­se AI models;

(d) con­tri­bu­te to the har­mo­ni­sa­ti­on of admi­ni­stra­ti­ve prac­ti­ces in the Mem­ber Sta­tes, inclu­ding in rela­ti­on to the dero­ga­ti­on from the con­for­mi­ty assess­ment pro­ce­du­res refer­red to in Artic­le 46, the func­tio­ning of AI regu­la­to­ry sand­bo­xes, and test­ing in real world con­di­ti­ons refer­red to in Artic­les 57, 59 and 60;

(e) at the request of the Com­mis­si­on or on its own initia­ti­ve, issue recom­men­da­ti­ons and writ­ten opi­ni­ons on any rele­vant mat­ters rela­ted to the imple­men­ta­ti­on of this Regu­la­ti­on and to its con­si­stent and effec­ti­ve appli­ca­ti­on, including:

(i) on the deve­lo­p­ment and appli­ca­ti­on of codes of con­duct and codes of prac­ti­ce pur­su­ant to this Regu­la­ti­on, as well as of the Commission’s guidelines;

(ii) the eva­lua­ti­on and review of this Regu­la­ti­on pur­su­ant to Artic­le 112, inclu­ding as regards the serious inci­dent reports refer­red to in Artic­le 73, and the func­tio­ning of the EU data­ba­se refer­red to in Artic­le 71, the pre­pa­ra­ti­on of the dele­ga­ted or imple­men­ting acts, and as regards pos­si­ble ali­gnments of this Regu­la­ti­on with the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex I;

(iii) on tech­ni­cal spe­ci­fi­ca­ti­ons or exi­sting stan­dards regar­ding the requi­re­ments set out in Chap­ter III, Sec­tion 2;

(iv) on the use of har­mo­ni­s­ed stan­dards or com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­les 40 and 41;

(v) trends, such as Euro­pean glo­bal com­pe­ti­ti­ve­ness in AI, the upt­ake of AI in the Uni­on, and the deve­lo­p­ment of digi­tal skills;

(vi) trends on the evol­ving typo­lo­gy of AI value chains, in par­ti­cu­lar on the resul­ting impli­ca­ti­ons in terms of accountability;

(vii) on the poten­ti­al need for amend­ment to Annex III in accordance with Artic­le 7, and on the poten­ti­al need for pos­si­ble revi­si­on of Artic­le 5 pur­su­ant to Artic­le 112, taking into account rele­vant available evi­dence and the latest deve­lo­p­ments in technology;

(f) sup­port the Com­mis­si­on in pro­mo­ting AI liter­a­cy, public awa­re­ness and under­stan­ding of the bene­fits, risks, safe­guards and rights and obli­ga­ti­ons in rela­ti­on to the use of AI systems;

(g) faci­li­ta­te the deve­lo­p­ment of com­mon cri­te­ria and a shared under­stan­ding among mar­ket ope­ra­tors and com­pe­tent aut­ho­ri­ties of the rele­vant con­cepts pro­vi­ded for in this Regu­la­ti­on, inclu­ding by con­tri­bu­ting to the deve­lo­p­ment of benchmarks;

(h) coope­ra­te, as appro­pria­te, with other Uni­on insti­tu­ti­ons, bodies, offices and agen­ci­es, as well as rele­vant Uni­on expert groups and net­works, in par­ti­cu­lar in the fields of pro­duct safe­ty, cyber­se­cu­ri­ty, com­pe­ti­ti­on, digi­tal and media ser­vices, finan­cial ser­vices, con­su­mer pro­tec­tion, data and fun­da­men­tal rights protection;

(i) con­tri­bu­te to effec­ti­ve coope­ra­ti­on with the com­pe­tent aut­ho­ri­ties of third count­ries and with inter­na­tio­nal organisations;

(j) assist natio­nal com­pe­tent aut­ho­ri­ties and the Com­mis­si­on in deve­lo­ping the orga­ni­sa­tio­nal and tech­ni­cal exper­ti­se requi­red for the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding by con­tri­bu­ting to the assess­ment of trai­ning needs for staff of Mem­ber Sta­tes invol­ved in imple­men­ting this Regulation;

(k) assist the AI Office in sup­port­ing natio­nal com­pe­tent aut­ho­ri­ties in the estab­lish­ment and deve­lo­p­ment of AI regu­la­to­ry sand­bo­xes, and faci­li­ta­te coope­ra­ti­on and infor­ma­ti­on-sha­ring among AI regu­la­to­ry sandboxes;

(l) con­tri­bu­te to, and pro­vi­de rele­vant advice on, the deve­lo­p­ment of gui­dance documents;

(m) advi­se the Com­mis­si­on in rela­ti­on to inter­na­tio­nal mat­ters on AI;

(n) pro­vi­de opi­ni­ons to the Com­mis­si­on on the qua­li­fi­ed alerts regar­ding gene­ral-pur­po­se AI models;

(o) recei­ve opi­ni­ons by the Mem­ber Sta­tes on qua­li­fi­ed alerts regar­ding gene­ral-pur­po­se AI models, and on natio­nal expe­ri­en­ces and prac­ti­ces on the moni­to­ring and enforce­ment of AI systems, in par­ti­cu­lar systems inte­gra­ting the gene­ral-pur­po­se AI models.

Artic­le 67 Advi­so­ry forum

1. An advi­so­ry forum shall be estab­lished to pro­vi­de tech­ni­cal exper­ti­se and advi­se the Board and the Com­mis­si­on, and to con­tri­bu­te to their tasks under this Regulation.

2. The mem­ber­ship of the advi­so­ry forum shall repre­sent a balan­ced sel­ec­tion of stake­hol­ders, inclu­ding indu­stry, start-ups, SMEs, civil socie­ty and aca­de­mia. The mem­ber­ship of the advi­so­ry forum shall be balan­ced with regard to com­mer­cial and non-com­mer­cial inte­rests and, within the cate­go­ry of com­mer­cial inte­rests, with regard to SMEs and other undertakings.

3. The Com­mis­si­on shall appoint the mem­bers of the advi­so­ry forum, in accordance with the cri­te­ria set out in para­graph 2, from among­st stake­hol­ders with reco­g­nis­ed exper­ti­se in the field of AI.

4. The term of office of the mem­bers of the advi­so­ry forum shall be two years, which may be exten­ded by up to no more than four years.

5. The Fun­da­men­tal Rights Agen­cy, ENISA, the Euro­pean Com­mit­tee for Stan­dar­dizati­on (CEN), the Euro­pean Com­mit­tee for Elec­tro­tech­ni­cal Stan­dar­dizati­on (CENELEC), and the Euro­pean Tele­com­mu­ni­ca­ti­ons Stan­dards Insti­tu­te (ETSI) shall be per­ma­nent mem­bers of the advi­so­ry forum.

6. The advi­so­ry forum shall draw up its rules of pro­ce­du­re. It shall elect two co-chairs from among its mem­bers, in accordance with cri­te­ria set out in para­graph 2. The term of office of the co-chairs shall be two years, rene­wa­ble once.

7. The advi­so­ry forum shall hold mee­tings at least twice a year. The advi­so­ry forum may invi­te experts and other stake­hol­ders to its meetings.

8. The advi­so­ry forum may prepa­re opi­ni­ons, recom­men­da­ti­ons and writ­ten con­tri­bu­ti­ons at the request of the Board or the Commission.

9. The advi­so­ry forum may estab­lish stan­ding or tem­po­ra­ry sub-groups as appro­pria­te for the pur­po­se of exami­ning spe­ci­fic que­sti­ons rela­ted to the objec­ti­ves of this Regulation.

10. The advi­so­ry forum shall prepa­re an annu­al report on its acti­vi­ties. That report shall be made publicly available.

(150) With a view to ensu­ring the invol­vement of stake­hol­ders in the imple­men­ta­ti­on and appli­ca­ti­on of this Regu­la­ti­on, an advi­so­ry forum should be estab­lished to advi­se and pro­vi­de tech­ni­cal exper­ti­se to the Board and the Com­mis­si­on. To ensu­re a varied and balan­ced stake­hol­der repre­sen­ta­ti­on bet­ween com­mer­cial and non-com­mer­cial inte­rest and, within the cate­go­ry of com­mer­cial inte­rests, with regards to SMEs and other under­ta­kings, the advi­so­ry forum should com­pri­se inter alia indu­stry, start-ups, SMEs, aca­de­mia, civil socie­ty, inclu­ding the social part­ners, as well as the Fun­da­men­tal Rights Agen­cy, ENISA, the Euro­pean Com­mit­tee for Stan­dar­dizati­on (CEN), the Euro­pean Com­mit­tee for Elec­tro­tech­ni­cal Stan­dar­dizati­on (CENELEC) and the Euro­pean Tele­com­mu­ni­ca­ti­ons Stan­dards Insti­tu­te (ETSI).

Artic­le 68 Sci­en­ti­fic panel of inde­pen­dent experts

1. The Com­mis­si­on shall, by means of an imple­men­ting act, make pro­vi­si­ons on the estab­lish­ment of a sci­en­ti­fic panel of inde­pen­dent experts (the ‘sci­en­ti­fic panel’) inten­ded to sup­port the enforce­ment acti­vi­ties under this Regu­la­ti­on. That imple­men­ting act shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

2. The sci­en­ti­fic panel shall con­sist of experts sel­ec­ted by the Com­mis­si­on on the basis of up-to-date sci­en­ti­fic or tech­ni­cal exper­ti­se in the field of AI neces­sa­ry for the tasks set out in para­graph 3, and shall be able to demon­stra­te mee­ting all of the fol­lo­wing conditions:

(a) having par­ti­cu­lar exper­ti­se and com­pe­tence and sci­en­ti­fic or tech­ni­cal exper­ti­se in the field of AI; 

(b) inde­pen­dence from any pro­vi­der of AI systems or gene­ral-pur­po­se AI models;

(c) an abili­ty to car­ry out acti­vi­ties dili­gent­ly, accu­ra­te­ly and objectively.

The Com­mis­si­on, in con­sul­ta­ti­on with the Board, shall deter­mi­ne the num­ber of experts on the panel in accordance with the requi­red needs and shall ensu­re fair gen­der and geo­gra­phi­cal representation.

3. The sci­en­ti­fic panel shall advi­se and sup­port the AI Office, in par­ti­cu­lar with regard to the fol­lo­wing tasks:

(a) sup­port­ing the imple­men­ta­ti­on and enforce­ment of this Regu­la­ti­on as regards gene­ral-pur­po­se AI models and systems, in par­ti­cu­lar by:

(i) aler­ting the AI Office of pos­si­ble syste­mic risks at Uni­on level of gene­ral-pur­po­se AI models, in accordance with Artic­le 90;

(ii) con­tri­bu­ting to the deve­lo­p­ment of tools and metho­do­lo­gies for eva­lua­ting capa­bi­li­ties of gene­ral-pur­po­se AI models and systems, inclu­ding through benchmarks; 

(iii) pro­vi­ding advice on the clas­si­fi­ca­ti­on of gene­ral-pur­po­se AI models with syste­mic risk;

(iv) pro­vi­ding advice on the clas­si­fi­ca­ti­on of various gene­ral-pur­po­se AI models and systems;

(v) con­tri­bu­ting to the deve­lo­p­ment of tools and templates;

(b) sup­port­ing the work of mar­ket sur­veil­lan­ce aut­ho­ri­ties, at their request;

(c) sup­port­ing cross-bor­der mar­ket sur­veil­lan­ce acti­vi­ties as refer­red to in Artic­le 74(11), wit­hout pre­ju­di­ce to the powers of mar­ket sur­veil­lan­ce authorities;

(d) sup­port­ing the AI Office in car­ry­ing out its duties in the con­text of the Uni­on safe­guard pro­ce­du­re pur­su­ant to Artic­le 81.

(163) With a view to com­ple­men­ting the gover­nan­ce systems for gene­ral-pur­po­se AI models, the sci­en­ti­fic panel should sup­port the moni­to­ring acti­vi­ties of the AI Office and may, in cer­tain cases, pro­vi­de qua­li­fi­ed alerts to the AI Office which trig­ger fol­low-ups, such as inve­sti­ga­ti­ons. This should be the case whe­re the sci­en­ti­fic panel has rea­son to suspect that a gene­ral-pur­po­se AI model poses a con­cre­te and iden­ti­fia­ble risk at Uni­on level. Fur­ther­mo­re, this should be the case whe­re the sci­en­ti­fic panel has rea­son to suspect that a gene­ral-pur­po­se AI model meets the cri­te­ria that would lead to a clas­si­fi­ca­ti­on as gene­ral-pur­po­se AI model with syste­mic risk. To equip the sci­en­ti­fic panel with the infor­ma­ti­on neces­sa­ry for the per­for­mance of tho­se tasks, the­re should be a mecha­nism wher­eby the sci­en­ti­fic panel can request the Com­mis­si­on to requi­re docu­men­ta­ti­on or infor­ma­ti­on from a provider. 

4. The experts on the sci­en­ti­fic panel shall per­form their tasks with impar­tia­li­ty and objec­ti­vi­ty, and shall ensu­re the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks and acti­vi­ties. They shall neither seek nor take ins­truc­tions from anyo­ne when exer­cis­ing their tasks under para­graph 3. Each expert shall draw up a decla­ra­ti­on of inte­rests, which shall be made publicly available. The AI Office shall estab­lish systems and pro­ce­du­res to actively mana­ge and pre­vent poten­ti­al con­flicts of interest.

5. The imple­men­ting act refer­red to in para­graph 1 shall include pro­vi­si­ons on the con­di­ti­ons, pro­ce­du­res and detail­ed arran­ge­ments for the sci­en­ti­fic panel and its mem­bers to issue alerts, and to request the assi­stance of the AI Office for the per­for­mance of the tasks of the sci­en­ti­fic panel.

(151) To sup­port the imple­men­ta­ti­on and enforce­ment of this Regu­la­ti­on, in par­ti­cu­lar the moni­to­ring acti­vi­ties of the AI Office as regards gene­ral-pur­po­se AI models, a sci­en­ti­fic panel of inde­pen­dent experts should be estab­lished. The inde­pen­dent experts con­sti­tu­ting the sci­en­ti­fic panel should be sel­ec­ted on the basis of up-to-date sci­en­ti­fic or tech­ni­cal exper­ti­se in the field of AI and should per­form their tasks with impar­tia­li­ty, objec­ti­vi­ty and ensu­re the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks and acti­vi­ties. To allow the rein­force­ment of natio­nal capa­ci­ties neces­sa­ry for the effec­ti­ve enforce­ment of this Regu­la­ti­on, Mem­ber Sta­tes should be able to request sup­port from the pool of experts con­sti­tu­ting the sci­en­ti­fic panel for their enforce­ment activities.

Artic­le 69 Access to the pool of experts by the Mem­ber States

1. Mem­ber Sta­tes may call upon experts of the sci­en­ti­fic panel to sup­port their enforce­ment acti­vi­ties under this Regulation.

2. The Mem­ber Sta­tes may be requi­red to pay fees for the advice and sup­port pro­vi­ded by the experts. The struc­tu­re and the level of fees as well as the sca­le and struc­tu­re of reco­vera­ble costs shall be set out in the imple­men­ting act refer­red to in Artic­le 68(1), taking into account the objec­ti­ves of the ade­qua­te imple­men­ta­ti­on of this Regu­la­ti­on, cost-effec­ti­ve­ness and the neces­si­ty of ensu­ring effec­ti­ve access to experts for all Mem­ber States.

3. The Com­mis­si­on shall faci­li­ta­te time­ly access to the experts by the Mem­ber Sta­tes, as nee­ded, and ensu­re that the com­bi­na­ti­on of sup­port acti­vi­ties car­ri­ed out by Uni­on AI test­ing sup­port pur­su­ant to Artic­le 84 and experts pur­su­ant to this Artic­le is effi­ci­ent­ly orga­ni­s­ed and pro­vi­des the best pos­si­ble added value. 

Sec­tion 2 Natio­nal Com­pe­tent Authorities

Artic­le 70 Desi­gna­ti­on of natio­nal com­pe­tent aut­ho­ri­ties and sin­gle points of contact

1. Each Mem­ber Sta­te shall estab­lish or desi­gna­te as natio­nal com­pe­tent aut­ho­ri­ties at least one noti­fy­ing aut­ho­ri­ty and at least one mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regu­la­ti­on. Tho­se natio­nal com­pe­tent aut­ho­ri­ties shall exer­cise their powers inde­pendent­ly, impar­ti­al­ly and wit­hout bias so as to safe­guard the objec­ti­vi­ty of their acti­vi­ties and tasks, and to ensu­re the appli­ca­ti­on and imple­men­ta­ti­on of this Regu­la­ti­on. The mem­bers of tho­se aut­ho­ri­ties shall refrain from any action incom­pa­ti­ble with their duties. Pro­vi­ded that tho­se prin­ci­ples are obser­ved, such acti­vi­ties and tasks may be per­for­med by one or more desi­gna­ted aut­ho­ri­ties, in accordance with the orga­ni­sa­tio­nal needs of the Mem­ber State.

(153) Mem­ber Sta­tes hold a key role in the appli­ca­ti­on and enforce­ment of this Regu­la­ti­on. In that respect, each Mem­ber Sta­te should desi­gna­te at least one noti­fy­ing aut­ho­ri­ty and at least one mar­ket sur­veil­lan­ce aut­ho­ri­ty as natio­nal com­pe­tent aut­ho­ri­ties for the pur­po­se of super­vi­sing the appli­ca­ti­on and imple­men­ta­ti­on of this Regu­la­ti­on. Mem­ber Sta­tes may deci­de to appoint any kind of public enti­ty to per­form the tasks of the natio­nal com­pe­tent aut­ho­ri­ties within the mea­ning of this Regu­la­ti­on, in accordance with their spe­ci­fic natio­nal orga­ni­sa­tio­nal cha­rac­te­ri­stics and needs. In order to increa­se orga­ni­sa­ti­on effi­ci­en­cy on the side of Mem­ber Sta­tes and to set a sin­gle point of cont­act vis-à-vis the public and other coun­ter­parts at Mem­ber Sta­te and Uni­on levels, each Mem­ber Sta­te should desi­gna­te a mar­ket sur­veil­lan­ce aut­ho­ri­ty to act as a sin­gle point of contact.

(154) The natio­nal com­pe­tent aut­ho­ri­ties should exer­cise their powers inde­pendent­ly, impar­ti­al­ly and wit­hout bias, so as to safe­guard the prin­ci­ples of objec­ti­vi­ty of their acti­vi­ties and tasks and to ensu­re the appli­ca­ti­on and imple­men­ta­ti­on of this Regu­la­ti­on. The mem­bers of the­se aut­ho­ri­ties should refrain from any action incom­pa­ti­ble with their duties and should be sub­ject to con­fi­den­tia­li­ty rules under this Regulation.

(156) In order to ensu­re an appro­pria­te and effec­ti­ve enforce­ment of the requi­re­ments and obli­ga­ti­ons set out by this Regu­la­ti­on, which is Uni­on har­mo­ni­sa­ti­on legis­la­ti­on, the system of mar­ket sur­veil­lan­ce and com­pli­ance of pro­ducts estab­lished by Regu­la­ti­on (EU) 2019/1020 should app­ly in its enti­re­ty. Mar­ket sur­veil­lan­ce aut­ho­ri­ties desi­gna­ted pur­su­ant to this Regu­la­ti­on should have all enforce­ment powers laid down in this Regu­la­ti­on and in Regu­la­ti­on (EU) 2019/1020 and should exer­cise their powers and car­ry out their duties inde­pendent­ly, impar­ti­al­ly and wit­hout bias. Alt­hough the majo­ri­ty of AI systems are not sub­ject to spe­ci­fic requi­re­ments and obli­ga­ti­ons under this Regu­la­ti­on, mar­ket sur­veil­lan­ce aut­ho­ri­ties may take mea­su­res in rela­ti­on to all AI systems when they pre­sent a risk in accordance with this Regu­la­ti­on. Due to the spe­ci­fic natu­re of Uni­on insti­tu­ti­ons, agen­ci­es and bodies fal­ling within the scope of this Regu­la­ti­on, it is appro­pria­te to desi­gna­te the Euro­pean Data Pro­tec­tion Super­vi­sor as a com­pe­tent mar­ket sur­veil­lan­ce aut­ho­ri­ty for them. This should be wit­hout pre­ju­di­ce to the desi­gna­ti­on of natio­nal com­pe­tent aut­ho­ri­ties by the Mem­ber Sta­tes. Mar­ket sur­veil­lan­ce acti­vi­ties should not affect the abili­ty of the super­vi­sed enti­ties to car­ry out their tasks inde­pendent­ly, when such inde­pen­dence is requi­red by Uni­on law.

2. Mem­ber Sta­tes shall com­mu­ni­ca­te to the Com­mis­si­on the iden­ti­ty of the noti­fy­ing aut­ho­ri­ties and the mar­ket sur­veil­lan­ce aut­ho­ri­ties and the tasks of tho­se aut­ho­ri­ties, as well as any sub­se­quent chan­ges the­re­to. Mem­ber Sta­tes shall make publicly available infor­ma­ti­on on how com­pe­tent aut­ho­ri­ties and sin­gle points of cont­act can be cont­ac­ted, through elec­tro­nic com­mu­ni­ca­ti­on means by… [12 months from the date of ent­ry into force of this Regu­la­ti­on]. Mem­ber Sta­tes shall desi­gna­te a mar­ket sur­veil­lan­ce aut­ho­ri­ty to act as the sin­gle point of cont­act for this Regu­la­ti­on, and shall noti­fy the Com­mis­si­on of the iden­ti­ty of the sin­gle point of cont­act. The Com­mis­si­on shall make a list of the sin­gle points of cont­act publicly available.

3. Mem­ber Sta­tes shall ensu­re that their natio­nal com­pe­tent aut­ho­ri­ties are pro­vi­ded with ade­qua­te tech­ni­cal, finan­cial and human resour­ces, and with infras­truc­tu­re to ful­fil their tasks effec­tively under this Regu­la­ti­on. In par­ti­cu­lar, the natio­nal com­pe­tent aut­ho­ri­ties shall have a suf­fi­ci­ent num­ber of per­son­nel per­ma­nent­ly available who­se com­pe­ten­ces and exper­ti­se shall include an in-depth under­stan­ding of AI tech­no­lo­gies, data and data com­pu­ting, per­so­nal data pro­tec­tion, cyber­se­cu­ri­ty, fun­da­men­tal rights, health and safe­ty risks and know­ledge of exi­sting stan­dards and legal requi­re­ments. Mem­ber Sta­tes shall assess and, if neces­sa­ry, update com­pe­tence and resour­ce requi­re­ments refer­red to in this para­graph on an annu­al basis.

4. Natio­nal com­pe­tent aut­ho­ri­ties shall take appro­pria­te mea­su­res to ensu­re an ade­qua­te level of cybersecurity.

5. When per­forming their tasks, the natio­nal com­pe­tent aut­ho­ri­ties shall act in accordance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 78.

6. By … [one year from the date of ent­ry into force of this Regu­la­ti­on], and once every two years the­re­af­ter, Mem­ber Sta­tes shall report to the Com­mis­si­on on the sta­tus of the finan­cial and human resour­ces of the natio­nal com­pe­tent aut­ho­ri­ties, with an assess­ment of their ade­qua­cy. The Com­mis­si­on shall trans­mit that infor­ma­ti­on to the Board for dis­cus­sion and pos­si­ble recommendations.

7. The Com­mis­si­on shall faci­li­ta­te the exch­an­ge of expe­ri­ence bet­ween natio­nal com­pe­tent authorities.

8. Natio­nal com­pe­tent aut­ho­ri­ties may pro­vi­de gui­dance and advice on the imple­men­ta­ti­on of this Regu­la­ti­on, in par­ti­cu­lar to SMEs inclu­ding start-ups, taking into account the gui­dance and advice of the Board and the Com­mis­si­on, as appro­pria­te. When­ever natio­nal com­pe­tent aut­ho­ri­ties intend to pro­vi­de gui­dance and advice with regard to an AI system in are­as cover­ed by other Uni­on law, the natio­nal com­pe­tent aut­ho­ri­ties under that Uni­on law shall be con­sul­ted, as appropriate. 

9. Whe­re Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es fall within the scope of this Regu­la­ti­on, the Euro­pean Data Pro­tec­tion Super­vi­sor shall act as the com­pe­tent aut­ho­ri­ty for their supervision.

(157) This Regu­la­ti­on is wit­hout pre­ju­di­ce to the com­pe­ten­ces, tasks, powers and inde­pen­dence of rele­vant natio­nal public aut­ho­ri­ties or bodies which super­vi­se the appli­ca­ti­on of Uni­on law pro­tec­ting fun­da­men­tal rights, inclu­ding equa­li­ty bodies and data pro­tec­tion aut­ho­ri­ties. Whe­re neces­sa­ry for their man­da­te, tho­se natio­nal public aut­ho­ri­ties or bodies should also have access to any docu­men­ta­ti­on crea­ted under this Regu­la­ti­on. A spe­ci­fic safe­guard pro­ce­du­re should be set for ensu­ring ade­qua­te and time­ly enforce­ment against AI systems pre­sen­ting a risk to health, safe­ty and fun­da­men­tal rights. The pro­ce­du­re for such AI systems pre­sen­ting a risk should be applied to high-risk AI systems pre­sen­ting a risk, pro­hi­bi­ted systems which have been pla­ced on the mar­ket, put into ser­vice or used in vio­la­ti­on of the pro­hi­bi­ted prac­ti­ces laid down in this Regu­la­ti­on and AI systems which have been made available in vio­la­ti­on of the trans­pa­ren­cy requi­re­ments laid down in this Regu­la­ti­on and pre­sent a risk. 

Artic­le 71 EU data­ba­se for high-risk AI systems listed in Annex III

1. The Com­mis­si­on shall, in col­la­bo­ra­ti­on with the Mem­ber Sta­tes, set up and main­tain an EU data­ba­se con­tai­ning infor­ma­ti­on refer­red to in para­graphs 2 and 3 of this Artic­le con­cer­ning high-risk AI systems refer­red to in Artic­le 6(2) which are regi­stered in accordance with Artic­les 49 and 60 and AI systems that are not con­side­red as high-risk pur­su­ant to Artic­le 6(3) and which are regi­stered in accordance with Artic­le 6(4) and Artic­le 49. When set­ting the func­tion­al spe­ci­fi­ca­ti­ons of such data­ba­se, the Com­mis­si­on shall con­sult the rele­vant experts, and when updating the func­tion­al spe­ci­fi­ca­ti­ons of such data­ba­se, the Com­mis­si­on shall con­sult the Board.

2. The data listed in Sec­tions A and B of Annex VIII shall be ente­red into the EU data­ba­se by the pro­vi­der or, whe­re appli­ca­ble, by the aut­ho­ri­sed representative.

3. The data listed in Sec­tion C of Annex VIII shall be ente­red into the EU data­ba­se by the deployer who is, or who acts on behalf of, a public aut­ho­ri­ty, agen­cy or body, in accordance with Artic­le 49(3) and (4).

4. With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public.

5. The EU data­ba­se shall con­tain per­so­nal data only in so far as neces­sa­ry for coll­ec­ting and pro­ce­s­sing infor­ma­ti­on in accordance with this Regu­la­ti­on. That infor­ma­ti­on shall include the names and cont­act details of natu­ral per­sons who are respon­si­ble for regi­stering the system and have the legal aut­ho­ri­ty to repre­sent the pro­vi­der or the deployer, as applicable.

6. The Com­mis­si­on shall be the con­trol­ler of the EU data­ba­se. It shall make available to pro­vi­ders, pro­s­pec­ti­ve pro­vi­ders and deployers ade­qua­te tech­ni­cal and admi­ni­stra­ti­ve sup­port. The EU data­ba­se shall com­ply with the appli­ca­ble acce­s­si­bi­li­ty requirements. 

Chapter IX Post-market monitoring, information sharing and market surveillance

Sec­tion 1 Post-Mar­ket Monitoring

Artic­le 72 Post-mar­ket moni­to­ring by pro­vi­ders and post-mar­ket moni­to­ring plan for high-risk AI systems

1. Pro­vi­ders shall estab­lish and docu­ment a post-mar­ket moni­to­ring system in a man­ner that is pro­por­tio­na­te to the natu­re of the AI tech­no­lo­gies and the risks of the high-risk AI system.

2. The post-mar­ket moni­to­ring system shall actively and syste­ma­ti­cal­ly coll­ect, docu­ment and ana­ly­se rele­vant data which may be pro­vi­ded by deployers or which may be coll­ec­ted through other sources on the per­for­mance of high-risk AI systems throug­hout their life­time, and which allow the pro­vi­der to eva­lua­te the con­ti­nuous com­pli­ance of AI systems with the requi­re­ments set out in Chap­ter III, Sec­tion 2. Whe­re rele­vant, post-mar­ket moni­to­ring shall include an ana­ly­sis of the inter­ac­tion with other AI systems. This obli­ga­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data of deployers which are law-enforce­ment authorities.

3. The post-mar­ket moni­to­ring system shall be based on a post-mar­ket moni­to­ring plan. The post-mar­ket moni­to­ring plan shall be part of the tech­ni­cal docu­men­ta­ti­on refer­red to in Annex IV. The Com­mis­si­on shall adopt an imple­men­ting act lay­ing down detail­ed pro­vi­si­ons estab­li­shing a tem­p­la­te for the post-mar­ket moni­to­ring plan and the list of ele­ments to be inclu­ded in the plan by … [18 months after the ent­ry into force of this Regu­la­ti­on]. That imple­men­ting act shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

4. For high-risk AI systems cover­ed by the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Sec­tion A of Annex I, whe­re a post-mar­ket moni­to­ring system and plan are alre­a­dy estab­lished under that legis­la­ti­on, in order to ensu­re con­si­sten­cy, avo­id dupli­ca­ti­ons and mini­mi­se addi­tio­nal bur­dens, pro­vi­ders shall have a choice of inte­gra­ting, as appro­pria­te, the neces­sa­ry ele­ments descri­bed in para­graphs 1, 2 and 3 using the tem­p­la­te refer­red in para­graph 3 into systems and plans alre­a­dy exi­sting under that legis­la­ti­on, pro­vi­ded that it achie­ves an equi­va­lent level of protection.

The first sub­pa­ra­graph of this para­graph shall also app­ly to high-risk AI systems refer­red to in point 5 of Annex III pla­ced on the mar­ket or put into ser­vice by finan­cial insti­tu­ti­ons that are sub­ject to requi­re­ments under Uni­on finan­cial ser­vices law regar­ding their inter­nal gover­nan­ce, arran­ge­ments or processes.

(155) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. Where relevant, post-market monitoring should include an analysis of the interaction with other AI systems including other devices and software. Post-market monitoring should not cover sensitive operational data of deployers which are law enforcement authorities. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents resulting from the use of their AI systems, meaning incidents or malfunctioning leading to death or serious damage to health, serious and irreversible disruption of the management and operation of critical infrastructure, infringements of obligations under Union law intended to protect fundamental rights or serious damage to property or the environment.

Sec­tion 2 Sha­ring Of Infor­ma­ti­on On Serious Incidents

Artic­le 73 Report­ing of serious incidents

1. Pro­vi­ders of high-risk AI systems pla­ced on the Uni­on mar­ket shall report any serious inci­dent to the mar­ket sur­veil­lan­ce aut­ho­ri­ties of the Mem­ber Sta­tes whe­re that inci­dent occurred.

2. The report refer­red to in para­graph 1 shall be made imme­dia­te­ly after the pro­vi­der has estab­lished a cau­sal link bet­ween the AI system and the serious inci­dent or the rea­sonable likeli­hood of such a link, and, in any event, not later than 15 days after the pro­vi­der or, whe­re appli­ca­ble, the deployer, beco­mes awa­re of the serious incident.

The peri­od for the report­ing refer­red to in the first sub­pa­ra­graph shall take account of the seve­ri­ty of the serious incident.

3. Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident.

4. Not­wi­th­stan­ding para­graph 2, in the event of the death of a per­son, the report shall be pro­vi­ded imme­dia­te­ly after the pro­vi­der or the deployer has estab­lished, or as soon as it suspects, a cau­sal rela­ti­on­ship bet­ween the high-risk AI system and the serious inci­dent, but not later than 10 days after the date on which the pro­vi­der or, whe­re appli­ca­ble, the deployer beco­mes awa­re of the serious incident.

5. Whe­re neces­sa­ry to ensu­re time­ly report­ing, the pro­vi­der or, whe­re appli­ca­ble, the deployer, may sub­mit an initi­al report that is incom­ple­te, fol­lo­wed by a com­ple­te report.
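As a reading aid only, the maximum reporting periods in Article 73(2) to (4) can be sketched as a small lookup. This is not part of the Regulation; the category labels below are our own shorthand, not defined terms, and the actual deadlines always run from the moment the provider or, where applicable, the deployer becomes aware of the serious incident.

```python
# Illustrative sketch of the serious-incident reporting deadlines in
# Article 73(2)-(4) AI Act, expressed as maximum days from awareness.
# Category labels are informal shorthand, not terms defined in the Act.

def reporting_deadline_days(incident_category: str) -> int:
    """Return the maximum reporting period in days for a serious incident."""
    deadlines = {
        "widespread_infringement": 2,  # Art. 73(3): incl. incidents under Art. 3(49)(b)
        "death": 10,                   # Art. 73(4): death of a person
    }
    # Art. 73(2): general rule of 15 days for all other serious incidents
    return deadlines.get(incident_category, 15)
```

Note that under Article 73(5) an incomplete initial report may be submitted first, followed by a complete report, where necessary to meet these deadlines.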

6. Fol­lo­wing the report­ing of a serious inci­dent pur­su­ant to para­graph 1, the pro­vi­der shall, wit­hout delay, per­form the neces­sa­ry inve­sti­ga­ti­ons in rela­ti­on to the serious inci­dent and the AI system con­cer­ned. This shall include a risk assess­ment of the inci­dent, and cor­rec­ti­ve action.

The pro­vi­der shall coope­ra­te with the com­pe­tent aut­ho­ri­ties, and whe­re rele­vant with the noti­fi­ed body con­cer­ned, during the inve­sti­ga­ti­ons refer­red to in the first sub­pa­ra­graph, and shall not per­form any inve­sti­ga­ti­on which invol­ves alte­ring the AI system con­cer­ned in a way which may affect any sub­se­quent eva­lua­ti­on of the cau­ses of the inci­dent, pri­or to informing the com­pe­tent aut­ho­ri­ties of such action.

7. Upon recei­ving a noti­fi­ca­ti­on rela­ted to a serious inci­dent refer­red to in Artic­le 3, point (49)(c), the rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ty shall inform the natio­nal public aut­ho­ri­ties or bodies refer­red to in Artic­le 77(1). The Com­mis­si­on shall deve­lop dedi­ca­ted gui­dance to faci­li­ta­te com­pli­ance with the obli­ga­ti­ons set out in para­graph 1 of this Artic­le. That gui­dance shall be issued by … [12 months after the ent­ry into force of this Regu­la­ti­on], and shall be asses­sed regularly.

8. The mar­ket sur­veil­lan­ce aut­ho­ri­ty shall take appro­pria­te mea­su­res, as pro­vi­ded for in Artic­le 19 of Regu­la­ti­on (EU) 2019/1020, within seven days from the date it recei­ved the noti­fi­ca­ti­on refer­red to in para­graph 1 of this Artic­le, and shall fol­low the noti­fi­ca­ti­on pro­ce­du­res as pro­vi­ded in that Regulation.

9. For high-risk AI systems refer­red to in Annex III that are pla­ced on the mar­ket or put into ser­vice by pro­vi­ders that are sub­ject to Uni­on legis­la­ti­ve instru­ments lay­ing down report­ing obli­ga­ti­ons equi­va­lent to tho­se set out in this Regu­la­ti­on, the noti­fi­ca­ti­on of serious inci­dents shall be limi­t­ed to tho­se refer­red to in Artic­le 3, point (49)(c).

10. For high-risk AI systems which are safe­ty com­pon­ents of devices, or are them­sel­ves devices, cover­ed by Regu­la­ti­ons (EU) 2017/745 and (EU) 2017/746, the noti­fi­ca­ti­on of serious inci­dents shall be limi­t­ed to tho­se refer­red to in Artic­le 3, point (49)(c) of this Regu­la­ti­on, and shall be made to the natio­nal com­pe­tent aut­ho­ri­ty cho­sen for that pur­po­se by the Mem­ber Sta­tes whe­re the inci­dent occurred.

11. Natio­nal com­pe­tent aut­ho­ri­ties shall imme­dia­te­ly noti­fy the Com­mis­si­on of any serious inci­dent, whe­ther or not they have taken action on it, in accordance with Artic­le 20 of Regu­la­ti­on (EU) 2019/1020.

Sec­tion 3 Enforcement

Artic­le 74 Mar­ket sur­veil­lan­ce and con­trol of AI systems in the Uni­on market

1. Regu­la­ti­on (EU) 2019/1020 shall app­ly to AI systems cover­ed by this Regu­la­ti­on. For the pur­po­ses of the effec­ti­ve enforce­ment of this Regulation:

(a) any refe­rence to an eco­no­mic ope­ra­tor under Regu­la­ti­on (EU) 2019/1020 shall be under­s­tood as inclu­ding all ope­ra­tors iden­ti­fi­ed in Artic­le 2(1) of this Regulation;

(b) any refe­rence to a pro­duct under Regu­la­ti­on (EU) 2019/1020 shall be under­s­tood as inclu­ding all AI systems fal­ling within the scope of this Regulation. 

2. As part of their report­ing obli­ga­ti­ons under Artic­le 34(4) of Regu­la­ti­on (EU) 2019/1020, the mar­ket sur­veil­lan­ce aut­ho­ri­ties shall report annu­al­ly to the Com­mis­si­on and rele­vant natio­nal com­pe­ti­ti­on aut­ho­ri­ties any infor­ma­ti­on iden­ti­fi­ed in the cour­se of mar­ket sur­veil­lan­ce acti­vi­ties that may be of poten­ti­al inte­rest for the appli­ca­ti­on of Uni­on law on com­pe­ti­ti­on rules. They shall also annu­al­ly report to the Com­mis­si­on about the use of pro­hi­bi­ted prac­ti­ces that occur­red during that year and about the mea­su­res taken.

3. For high-risk AI systems rela­ted to pro­ducts cover­ed by the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Sec­tion A of Annex I, the mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regu­la­ti­on shall be the aut­ho­ri­ty respon­si­ble for mar­ket sur­veil­lan­ce acti­vi­ties desi­gna­ted under tho­se legal acts.

By dero­ga­ti­on from the first sub­pa­ra­graph, and in appro­pria­te cir­cum­stances, Mem­ber Sta­tes may desi­gna­te ano­ther rele­vant aut­ho­ri­ty to act as a mar­ket sur­veil­lan­ce aut­ho­ri­ty, pro­vi­ded they ensu­re coor­di­na­ti­on with the rele­vant sec­to­ral mar­ket sur­veil­lan­ce aut­ho­ri­ties respon­si­ble for the enforce­ment of the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex I.

4. The pro­ce­du­res refer­red to in Artic­les 79 to 83 of this Regu­la­ti­on shall not app­ly to AI systems rela­ted to pro­ducts cover­ed by the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in sec­tion A of Annex I, whe­re such legal acts alre­a­dy pro­vi­de for pro­ce­du­res ensu­ring an equi­va­lent level of pro­tec­tion and having the same objec­ti­ve. In such cases, the rele­vant sec­to­ral pro­ce­du­res shall app­ly instead.

5. Wit­hout pre­ju­di­ce to the powers of mar­ket sur­veil­lan­ce aut­ho­ri­ties under Artic­le 14 of Regu­la­ti­on (EU) 2019/1020, for the pur­po­se of ensu­ring the effec­ti­ve enforce­ment of this Regu­la­ti­on, mar­ket sur­veil­lan­ce aut­ho­ri­ties may exer­cise the powers refer­red to in Artic­le 14(4), points (d) and (j), of that Regu­la­ti­on remo­te­ly, as appropriate.

6. For high-risk AI systems pla­ced on the mar­ket, put into ser­vice, or used by finan­cial insti­tu­ti­ons regu­la­ted by Uni­on finan­cial ser­vices law, the mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regu­la­ti­on shall be the rele­vant natio­nal aut­ho­ri­ty respon­si­ble for the finan­cial super­vi­si­on of tho­se insti­tu­ti­ons under that legis­la­ti­on in so far as the pla­cing on the mar­ket, put­ting into ser­vice, or the use of the AI system is in direct con­nec­tion with the pro­vi­si­on of tho­se finan­cial services.

(157) This Regu­la­ti­on is wit­hout pre­ju­di­ce to the com­pe­ten­ces, tasks, powers and inde­pen­dence of rele­vant natio­nal public aut­ho­ri­ties or bodies which super­vi­se the appli­ca­ti­on of Uni­on law pro­tec­ting fun­da­men­tal rights, inclu­ding equa­li­ty bodies and data pro­tec­tion aut­ho­ri­ties. Whe­re neces­sa­ry for their man­da­te, tho­se natio­nal public aut­ho­ri­ties or bodies should also have access to any docu­men­ta­ti­on crea­ted under this Regu­la­ti­on. A spe­ci­fic safe­guard pro­ce­du­re should be set for ensu­ring ade­qua­te and time­ly enforce­ment against AI systems pre­sen­ting a risk to health, safe­ty and fun­da­men­tal rights. The pro­ce­du­re for such AI systems pre­sen­ting a risk should be applied to high-risk AI systems pre­sen­ting a risk, pro­hi­bi­ted systems which have been pla­ced on the mar­ket, put into ser­vice or used in vio­la­ti­on of the pro­hi­bi­ted prac­ti­ces laid down in this Regu­la­ti­on and AI systems which have been made available in vio­la­ti­on of the trans­pa­ren­cy requi­re­ments laid down in this Regu­la­ti­on and pre­sent a risk. 

(158) Uni­on finan­cial ser­vices law inclu­des inter­nal gover­nan­ce and risk-manage­ment rules and requi­re­ments which are appli­ca­ble to regu­la­ted finan­cial insti­tu­ti­ons in the cour­se of pro­vi­si­on of tho­se ser­vices, inclu­ding when they make use of AI systems. In order to ensu­re coher­ent appli­ca­ti­on and enforce­ment of the obli­ga­ti­ons under this Regu­la­ti­on and rele­vant rules and requi­re­ments of the Uni­on finan­cial ser­vices legal acts, the com­pe­tent aut­ho­ri­ties for the super­vi­si­on and enforce­ment of tho­se legal acts, in par­ti­cu­lar com­pe­tent aut­ho­ri­ties as defi­ned in Regu­la­ti­on (EU) No 575/2013 of the Euro­pean Par­lia­ment and of the Coun­cil and Direc­ti­ves 2008/48/EC, 2009/138/EC, 2013/36/EU, 2014/17/EU and (EU) 2016/97 of the Euro­pean Par­lia­ment and of the Coun­cil, should be desi­gna­ted, within their respec­ti­ve com­pe­ten­ces, as com­pe­tent aut­ho­ri­ties for the pur­po­se of super­vi­sing the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding for mar­ket sur­veil­lan­ce acti­vi­ties, as regards AI systems pro­vi­ded or used by regu­la­ted and super­vi­sed finan­cial insti­tu­ti­ons unless Mem­ber Sta­tes deci­de to desi­gna­te ano­ther aut­ho­ri­ty to ful­fil the­se mar­ket sur­veil­lan­ce tasks.

Tho­se com­pe­tent aut­ho­ri­ties should have all powers under this Regu­la­ti­on and Regu­la­ti­on (EU) 2019/1020 to enforce the requi­re­ments and obli­ga­ti­ons of this Regu­la­ti­on, inclu­ding powers to car­ry out ex post mar­ket sur­veil­lan­ce acti­vi­ties that can be inte­gra­ted, as appro­pria­te, into their exi­sting super­vi­so­ry mecha­nisms and pro­ce­du­res under the rele­vant Uni­on finan­cial ser­vices law. It is appro­pria­te to envi­sa­ge that, when acting as mar­ket sur­veil­lan­ce aut­ho­ri­ties under this Regu­la­ti­on, the natio­nal aut­ho­ri­ties respon­si­ble for the super­vi­si­on of cre­dit insti­tu­ti­ons regu­la­ted under Direc­ti­ve 2013/36/EU, which are par­ti­ci­pa­ting in the Sin­gle Super­vi­so­ry Mecha­nism estab­lished by Coun­cil Regu­la­ti­on (EU) No 1024/2013, should report, wit­hout delay, to the Euro­pean Cen­tral Bank any infor­ma­ti­on iden­ti­fi­ed in the cour­se of their mar­ket sur­veil­lan­ce acti­vi­ties that may be of poten­ti­al inte­rest for the Euro­pean Cen­tral Bank’s pru­den­ti­al super­vi­so­ry tasks as spe­ci­fi­ed in that Regulation.

To fur­ther enhan­ce the con­si­sten­cy bet­ween this Regu­la­ti­on and the rules appli­ca­ble to cre­dit insti­tu­ti­ons regu­la­ted under Direc­ti­ve 2013/36/EU, it is also appro­pria­te to inte­gra­te some of the pro­vi­ders’ pro­ce­du­ral obli­ga­ti­ons in rela­ti­on to risk manage­ment, post mar­ke­ting moni­to­ring and docu­men­ta­ti­on into the exi­sting obli­ga­ti­ons and pro­ce­du­res under Direc­ti­ve 2013/36/EU. In order to avo­id over­laps, limi­t­ed dero­ga­ti­ons should also be envi­sa­ged in rela­ti­on to the qua­li­ty manage­ment system of pro­vi­ders and the moni­to­ring obli­ga­ti­on pla­ced on deployers of high-risk AI systems to the ext­ent that the­se app­ly to cre­dit insti­tu­ti­ons regu­la­ted by Direc­ti­ve 2013/36/EU. The same regime should app­ly to insu­rance and re-insu­rance under­ta­kings and insu­rance hol­ding com­pa­nies under Direc­ti­ve 2009/138/EC and the insu­rance inter­me­dia­ries under Direc­ti­ve (EU) 2016/97 and other types of finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses estab­lished pur­su­ant to the rele­vant Uni­on finan­cial ser­vices law to ensu­re con­si­sten­cy and equal tre­at­ment in the finan­cial sector. 

7. By way of dero­ga­ti­on from para­graph 6, in appro­pria­te cir­cum­stances, and pro­vi­ded that coor­di­na­ti­on is ensu­red, ano­ther rele­vant aut­ho­ri­ty may be iden­ti­fi­ed by the Mem­ber Sta­te as mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regulation.

Natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ties super­vi­sing cre­dit insti­tu­ti­ons regu­la­ted under Direc­ti­ve 2013/36/EU, which are par­ti­ci­pa­ting in the Sin­gle Super­vi­so­ry Mecha­nism estab­lished by Regu­la­ti­on (EU) No 1024/2013, shall report, wit­hout delay, to the Euro­pean Cen­tral Bank any infor­ma­ti­on iden­ti­fi­ed in the cour­se of their mar­ket sur­veil­lan­ce acti­vi­ties that may be of poten­ti­al inte­rest for the pru­den­ti­al super­vi­so­ry tasks of the Euro­pean Cen­tral Bank spe­ci­fi­ed in that Regulation.

8. For high-risk AI systems listed in point 1 of Annex III to this Regu­la­ti­on, in so far as the systems are used for law enforce­ment pur­po­ses, bor­der manage­ment and justi­ce and demo­cra­cy, and for high-risk AI systems listed in points 6, 7 and 8 of Annex III to this Regu­la­ti­on, Mem­ber Sta­tes shall desi­gna­te as mar­ket sur­veil­lan­ce aut­ho­ri­ties for the pur­po­ses of this Regu­la­ti­on eit­her the com­pe­tent data pro­tec­tion super­vi­so­ry aut­ho­ri­ties under Regu­la­ti­on (EU) 2016/679 or Direc­ti­ve (EU) 2016/680, or any other aut­ho­ri­ty desi­gna­ted pur­su­ant to the same con­di­ti­ons laid down in Artic­les 41 to 44 of Direc­ti­ve (EU) 2016/680. Mar­ket sur­veil­lan­ce acti­vi­ties shall in no way affect the inde­pen­dence of judi­cial aut­ho­ri­ties, or other­wi­se inter­fe­re with their acti­vi­ties when acting in their judi­cial capacity.

(159) Each mar­ket sur­veil­lan­ce aut­ho­ri­ty for high-risk AI systems in the area of bio­me­trics, as listed in an annex to this Regu­la­ti­on inso­far as tho­se systems are used for the pur­po­ses of law enforce­ment, migra­ti­on, asyl­um and bor­der con­trol manage­ment, or the admi­ni­stra­ti­on of justi­ce and demo­cra­tic pro­ce­s­ses, should have effec­ti­ve inve­sti­ga­ti­ve and cor­rec­ti­ve powers, inclu­ding at least the power to obtain access to all per­so­nal data that are being pro­ce­s­sed and to all infor­ma­ti­on neces­sa­ry for the per­for­mance of its tasks. The mar­ket sur­veil­lan­ce aut­ho­ri­ties should be able to exer­cise their powers by acting with com­ple­te inde­pen­dence. Any limi­ta­ti­ons of their access to sen­si­ti­ve ope­ra­tio­nal data under this Regu­la­ti­on should be wit­hout pre­ju­di­ce to the powers con­fer­red to them by Direc­ti­ve (EU) 2016/680. No exclu­si­on on dis­clo­sing data to natio­nal data pro­tec­tion aut­ho­ri­ties under this Regu­la­ti­on should affect the cur­rent or future powers of tho­se aut­ho­ri­ties bey­ond the scope of this Regulation.

9. Whe­re Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es fall within the scope of this Regu­la­ti­on, the Euro­pean Data Pro­tec­tion Super­vi­sor shall act as their mar­ket sur­veil­lan­ce aut­ho­ri­ty, except in rela­ti­on to the Court of Justi­ce of the Euro­pean Uni­on acting in its judi­cial capacity.

10. Mem­ber Sta­tes shall faci­li­ta­te coor­di­na­ti­on bet­ween mar­ket sur­veil­lan­ce aut­ho­ri­ties desi­gna­ted under this Regu­la­ti­on and other rele­vant natio­nal aut­ho­ri­ties or bodies which super­vi­se the appli­ca­ti­on of Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex I, or in other Uni­on law, that might be rele­vant for the high-risk AI systems refer­red to in Annex III.

11. Mar­ket sur­veil­lan­ce aut­ho­ri­ties and the Com­mis­si­on shall be able to pro­po­se joint acti­vi­ties, inclu­ding joint inve­sti­ga­ti­ons, to be con­duc­ted by eit­her mar­ket sur­veil­lan­ce aut­ho­ri­ties or mar­ket sur­veil­lan­ce aut­ho­ri­ties joint­ly with the Com­mis­si­on, that have the aim of pro­mo­ting com­pli­ance, iden­ti­fy­ing non-com­pli­ance, rai­sing awa­re­ness or pro­vi­ding gui­dance in rela­ti­on to this Regu­la­ti­on with respect to spe­ci­fic cate­go­ries of high-risk AI systems that are found to pre­sent a serious risk across two or more Mem­ber Sta­tes in accordance with Artic­le 9 of Regu­la­ti­on (EU) 2019/1020. The AI Office shall pro­vi­de coor­di­na­ti­on sup­port for joint investigations.

(160) The mar­ket sur­veil­lan­ce aut­ho­ri­ties and the Com­mis­si­on should be able to pro­po­se joint acti­vi­ties, inclu­ding joint inve­sti­ga­ti­ons, to be con­duc­ted by mar­ket sur­veil­lan­ce aut­ho­ri­ties or mar­ket sur­veil­lan­ce aut­ho­ri­ties joint­ly with the Com­mis­si­on, that have the aim of pro­mo­ting com­pli­ance, iden­ti­fy­ing non-com­pli­ance, rai­sing awa­re­ness and pro­vi­ding gui­dance in rela­ti­on to this Regu­la­ti­on with respect to spe­ci­fic cate­go­ries of high-risk AI systems that are found to pre­sent a serious risk across two or more Mem­ber Sta­tes. Joint acti­vi­ties to pro­mo­te com­pli­ance should be car­ri­ed out in accordance with Artic­le 9 of Regu­la­ti­on (EU) 2019/1020. The AI Office should pro­vi­de coor­di­na­ti­on sup­port for joint investigations.

12. Wit­hout pre­ju­di­ce to the powers pro­vi­ded for under Regu­la­ti­on (EU) 2019/1020, and whe­re rele­vant and limi­t­ed to what is neces­sa­ry to ful­fil their tasks, the mar­ket sur­veil­lan­ce aut­ho­ri­ties shall be gran­ted full access by pro­vi­ders to the docu­men­ta­ti­on as well as the trai­ning, vali­da­ti­on and test­ing data sets used for the deve­lo­p­ment of high-risk AI systems, inclu­ding, whe­re appro­pria­te and sub­ject to secu­ri­ty safe­guards, through appli­ca­ti­on pro­gramming inter­faces (API) or other rele­vant tech­ni­cal means and tools enab­ling remo­te access. 

13. Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall be gran­ted access to the source code of the high-risk AI system upon a rea­so­ned request and only when both of the fol­lo­wing con­di­ti­ons are fulfilled:

(a) access to source code is neces­sa­ry to assess the con­for­mi­ty of a high-risk AI system with the requi­re­ments set out in Chap­ter III, Sec­tion 2; and

(b) test­ing or audi­ting pro­ce­du­res and veri­fi­ca­ti­ons based on the data and docu­men­ta­ti­on pro­vi­ded by the pro­vi­der have been exhau­sted or pro­ved insufficient.

14. Any infor­ma­ti­on or docu­men­ta­ti­on obtai­ned by mar­ket sur­veil­lan­ce aut­ho­ri­ties shall be trea­ted in accordance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 78.

Artic­le 75 Mutu­al assi­stance, mar­ket sur­veil­lan­ce and con­trol of gene­ral-pur­po­se AI systems

1. Whe­re an AI system is based on a gene­ral-pur­po­se AI model, and the model and the system are deve­lo­ped by the same pro­vi­der, the AI Office shall have powers to moni­tor and super­vi­se com­pli­ance of that AI system with obli­ga­ti­ons under this Regu­la­ti­on. To car­ry out its moni­to­ring and super­vi­si­on tasks, the AI Office shall have all the powers of a mar­ket sur­veil­lan­ce aut­ho­ri­ty pro­vi­ded for in this Sec­tion and Regu­la­ti­on (EU) 2019/1020.

(162) To make best use of the cen­tra­li­sed Uni­on exper­ti­se and syn­er­gies at Uni­on level, the powers of super­vi­si­on and enforce­ment of the obli­ga­ti­ons on pro­vi­ders of gene­ral-pur­po­se AI models should be a com­pe­tence of the Com­mis­si­on. The AI Office should be able to car­ry out all neces­sa­ry actions to moni­tor the effec­ti­ve imple­men­ta­ti­on of this Regu­la­ti­on as regards gene­ral-pur­po­se AI models. It should be able to inve­sti­ga­te pos­si­ble inf­rin­ge­ments of the rules on pro­vi­ders of gene­ral-pur­po­se AI models both on its own initia­ti­ve, fol­lo­wing the results of its moni­to­ring acti­vi­ties, or upon request from mar­ket sur­veil­lan­ce aut­ho­ri­ties in line with the con­di­ti­ons set out in this Regu­la­ti­on. To sup­port effec­ti­ve moni­to­ring of the AI Office, it should pro­vi­de for the pos­si­bi­li­ty that down­stream pro­vi­ders lodge com­plaints about pos­si­ble inf­rin­ge­ments of the rules on pro­vi­ders of gene­ral-pur­po­se AI models and systems. 

(164) The AI Office should be able to take the neces­sa­ry actions to moni­tor the effec­ti­ve imple­men­ta­ti­on of and com­pli­ance with the obli­ga­ti­ons for pro­vi­ders of gene­ral-pur­po­se AI models laid down in this Regu­la­ti­on. The AI Office should be able to inve­sti­ga­te pos­si­ble inf­rin­ge­ments in accordance with the powers pro­vi­ded for in this Regu­la­ti­on, inclu­ding by reque­st­ing docu­men­ta­ti­on and infor­ma­ti­on, by con­duc­ting eva­lua­tions, as well as by reque­st­ing mea­su­res from pro­vi­ders of gene­ral-pur­po­se AI models. When con­duc­ting eva­lua­tions, in order to make use of inde­pen­dent exper­ti­se, the AI Office should be able to invol­ve inde­pen­dent experts to car­ry out the eva­lua­tions on its behalf. Com­pli­ance with the obli­ga­ti­ons should be enforceable, inter alia, through requests to take appro­pria­te mea­su­res, inclu­ding risk miti­ga­ti­on mea­su­res in the case of iden­ti­fi­ed syste­mic risks as well as rest­ric­ting the making available on the mar­ket, with­dra­wing or recal­ling the model. As a safe­guard, whe­re nee­ded bey­ond the pro­ce­du­ral rights pro­vi­ded for in this Regu­la­ti­on, pro­vi­ders of gene­ral-pur­po­se AI models should have the pro­ce­du­ral rights pro­vi­ded for in Artic­le 18 of Regu­la­ti­on (EU) 2019/1020, which should app­ly muta­tis mut­an­dis, wit­hout pre­ju­di­ce to more spe­ci­fic pro­ce­du­ral rights pro­vi­ded for by this Regulation.

2. Whe­re the rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ties have suf­fi­ci­ent rea­son to con­sider gene­ral-pur­po­se AI systems that can be used direct­ly by deployers for at least one pur­po­se that is clas­si­fi­ed as high-risk pur­su­ant to this Regu­la­ti­on to be non-com­pli­ant with the requi­re­ments laid down in this Regu­la­ti­on, they shall coope­ra­te with the AI Office to car­ry out com­pli­ance eva­lua­tions, and shall inform the Board and other mar­ket sur­veil­lan­ce aut­ho­ri­ties accordingly.

3. Whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty is unable to con­clude its inve­sti­ga­ti­on of the high-risk AI system becau­se of its ina­bi­li­ty to access cer­tain infor­ma­ti­on rela­ted to the gene­ral-pur­po­se AI model despi­te having made all appro­pria­te efforts to obtain that infor­ma­ti­on, it may sub­mit a rea­so­ned request to the AI Office, by which access to that infor­ma­ti­on shall be enforced. In that case, the AI Office shall sup­p­ly to the appli­cant aut­ho­ri­ty wit­hout delay, and in any event within 30 days, any infor­ma­ti­on that the AI Office con­siders to be rele­vant in order to estab­lish whe­ther a high-risk AI system is non-com­pli­ant. Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall safe­guard the con­fi­den­tia­li­ty of the infor­ma­ti­on that they obtain in accordance with Artic­le 78 of this Regu­la­ti­on. The pro­ce­du­re pro­vi­ded for in Chap­ter VI of Regu­la­ti­on (EU) 2019/1020 shall app­ly muta­tis mutandis.

(161) It is neces­sa­ry to cla­ri­fy the respon­si­bi­li­ties and com­pe­ten­ces at Uni­on and natio­nal level as regards AI systems that are built on gene­ral-pur­po­se AI models. To avo­id over­lap­ping com­pe­ten­ces, whe­re an AI system is based on a gene­ral-pur­po­se AI model and the model and system are pro­vi­ded by the same pro­vi­der, the super­vi­si­on should take place at Uni­on level through the AI Office, which should have the powers of a mar­ket sur­veil­lan­ce aut­ho­ri­ty within the mea­ning of Regu­la­ti­on (EU) 2019/1020 for this pur­po­se. In all other cases, natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ties remain respon­si­ble for the super­vi­si­on of AI systems. Howe­ver, for gene­ral-pur­po­se AI systems that can be used direct­ly by deployers for at least one pur­po­se that is clas­si­fi­ed as high-risk, mar­ket sur­veil­lan­ce aut­ho­ri­ties should coope­ra­te with the AI Office to car­ry out eva­lua­tions of com­pli­ance and inform the Board and other mar­ket sur­veil­lan­ce aut­ho­ri­ties accor­din­gly. Fur­ther­mo­re, mar­ket sur­veil­lan­ce aut­ho­ri­ties should be able to request assi­stance from the AI Office whe­re the mar­ket sur­veil­lan­ce aut­ho­ri­ty is unable to con­clude an inve­sti­ga­ti­on on a high-risk AI system becau­se of its ina­bi­li­ty to access cer­tain infor­ma­ti­on rela­ted to the gene­ral-pur­po­se AI model on which the high-risk AI system is built. In such cases, the pro­ce­du­re regar­ding mutu­al assi­stance in cross-bor­der cases in Chap­ter VI of Regu­la­ti­on (EU) 2019/1020 should app­ly muta­tis mutandis.

Artic­le 76 Super­vi­si­on of test­ing in real world con­di­ti­ons by mar­ket sur­veil­lan­ce authorities

1. Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall have com­pe­ten­ces and powers to ensu­re that test­ing in real world con­di­ti­ons is in accordance with this Regulation.

2. Whe­re test­ing in real world con­di­ti­ons is con­duc­ted for AI systems that are super­vi­sed within an AI regu­la­to­ry sand­box under Artic­le 58, the mar­ket sur­veil­lan­ce aut­ho­ri­ties shall veri­fy the com­pli­ance with Artic­le 60 as part of their super­vi­so­ry role for the AI regu­la­to­ry sand­box. Tho­se aut­ho­ri­ties may, as appro­pria­te, allow the test­ing in real world con­di­ti­ons to be con­duc­ted by the pro­vi­der or pro­s­pec­ti­ve pro­vi­der, in dero­ga­ti­on from the con­di­ti­ons set out in Artic­le 60(4), points (f) and (g).

3. Whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty has been infor­med by the pro­s­pec­ti­ve pro­vi­der, the pro­vi­der or any third par­ty of a serious inci­dent or has other grounds for con­side­ring that the con­di­ti­ons set out in Artic­les 60 and 61 are not met, it may take eit­her of the fol­lo­wing decis­i­ons on its ter­ri­to­ry, as appropriate:

(a) to sus­pend or ter­mi­na­te the test­ing in real world conditions;

(b) to requi­re the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and the deployer or pro­s­pec­ti­ve deployer to modi­fy any aspect of the test­ing in real world conditions.

4. Whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty has taken a decis­i­on refer­red to in para­graph 3 of this Artic­le, or has issued an objec­tion within the mea­ning of Artic­le 60(4), point (b), the decis­i­on or the objec­tion shall indi­ca­te the grounds the­r­e­for and how the pro­vi­der or pro­s­pec­ti­ve pro­vi­der can chall­enge the decis­i­on or objection.

5. Whe­re appli­ca­ble, whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty has taken a decis­i­on refer­red to in para­graph 3, it shall com­mu­ni­ca­te the grounds the­r­e­for to the mar­ket sur­veil­lan­ce aut­ho­ri­ties of other Mem­ber Sta­tes in which the AI system has been tested in accordance with the test­ing plan.

Artic­le 77 Powers of aut­ho­ri­ties pro­tec­ting fun­da­men­tal rights

1. Natio­nal public aut­ho­ri­ties or bodies which super­vi­se or enforce the respect of obli­ga­ti­ons under Uni­on law pro­tec­ting fun­da­men­tal rights, inclu­ding the right to non-dis­cri­mi­na­ti­on, in rela­ti­on to the use of high-risk AI systems refer­red to in Annex III shall have the power to request and access any docu­men­ta­ti­on crea­ted or main­tai­ned under this Regu­la­ti­on in acce­s­si­ble lan­guage and for­mat when access to that docu­men­ta­ti­on is neces­sa­ry for effec­tively ful­fil­ling their man­da­tes within the limits of their juris­dic­tion. The rele­vant public aut­ho­ri­ty or body shall inform the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te con­cer­ned of any such request.

2. By … [three months after the ent­ry into force of this Regu­la­ti­on], each Mem­ber Sta­te shall iden­ti­fy the public aut­ho­ri­ties or bodies refer­red to in para­graph 1 and make a list of them publicly available. Mem­ber Sta­tes shall noti­fy the list to the Com­mis­si­on and to the other Mem­ber Sta­tes, and shall keep the list up to date.

3. Whe­re the docu­men­ta­ti­on refer­red to in para­graph 1 is insuf­fi­ci­ent to ascer­tain whe­ther an inf­rin­ge­ment of obli­ga­ti­ons under Uni­on law pro­tec­ting fun­da­men­tal rights has occur­red, the public aut­ho­ri­ty or body refer­red to in para­graph 1 may make a rea­so­ned request to the mar­ket sur­veil­lan­ce aut­ho­ri­ty, to orga­ni­se test­ing of the high-risk AI system through tech­ni­cal means. The mar­ket sur­veil­lan­ce aut­ho­ri­ty shall orga­ni­se the test­ing with the clo­se invol­vement of the reque­st­ing public aut­ho­ri­ty or body within a rea­sonable time fol­lo­wing the request.

4. Any infor­ma­ti­on or docu­men­ta­ti­on obtai­ned by the natio­nal public aut­ho­ri­ties or bodies refer­red to in para­graph 1 of this Artic­le pur­su­ant to this Artic­le shall be trea­ted in accordance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 78. 

Artic­le 78 Confidentiality

1. The Com­mis­si­on, mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fi­ed bodies and any other natu­ral or legal per­son invol­ved in the appli­ca­ti­on of this Regu­la­ti­on shall, in accordance with Uni­on or natio­nal law, respect the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks and acti­vi­ties in such a man­ner as to pro­tect, in particular:

(a) the intellec­tu­al pro­per­ty rights and con­fi­den­ti­al busi­ness infor­ma­ti­on or trade secrets of a natu­ral or legal per­son, inclu­ding source code, except in the cases refer­red to in Artic­le 5 of Direc­ti­ve (EU) 2016/943 of the Euro­pean Par­lia­ment and of the Council;

(b) the effec­ti­ve imple­men­ta­ti­on of this Regu­la­ti­on, in par­ti­cu­lar for the pur­po­ses of inspec­tions, inve­sti­ga­ti­ons or audits; 

(c) public and natio­nal secu­ri­ty interests;

(d) the con­duct of cri­mi­nal or admi­ni­stra­ti­ve proceedings;

(e) infor­ma­ti­on clas­si­fi­ed pur­su­ant to Uni­on or natio­nal law.

2. The aut­ho­ri­ties invol­ved in the appli­ca­ti­on of this Regu­la­ti­on pur­su­ant to para­graph 1 shall request only data that is strict­ly neces­sa­ry for the assess­ment of the risk posed by AI systems and for the exer­cise of their powers in accordance with this Regu­la­ti­on and with Regu­la­ti­on (EU) 2019/1020. They shall put in place ade­qua­te and effec­ti­ve cyber­se­cu­ri­ty mea­su­res to pro­tect the secu­ri­ty and con­fi­den­tia­li­ty of the infor­ma­ti­on and data obtai­ned, and shall dele­te the data coll­ec­ted as soon as it is no lon­ger nee­ded for the pur­po­se for which it was obtai­ned, in accordance with appli­ca­ble Uni­on or natio­nal law. 

3. Wit­hout pre­ju­di­ce to para­graphs 1 and 2, infor­ma­ti­on exch­an­ged on a con­fi­den­ti­al basis bet­ween the natio­nal com­pe­tent aut­ho­ri­ties or bet­ween natio­nal com­pe­tent aut­ho­ri­ties and the Com­mis­si­on shall not be dis­c­lo­sed wit­hout pri­or con­sul­ta­ti­on of the ori­gi­na­ting natio­nal com­pe­tent aut­ho­ri­ty and the deployer when high-risk AI systems refer­red to in point 1, 6 or 7 of Annex III are used by law enforce­ment, bor­der con­trol, immi­gra­ti­on or asyl­um aut­ho­ri­ties and when such dis­clo­sure would jeo­par­di­se public and natio­nal secu­ri­ty inte­rests. This exch­an­ge of infor­ma­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data in rela­ti­on to the acti­vi­ties of law enforce­ment, bor­der con­trol, immi­gra­ti­on or asyl­um authorities.

When the law enforce­ment, immi­gra­ti­on or asyl­um aut­ho­ri­ties are pro­vi­ders of high-risk AI systems refer­red to in point 1, 6 or 7 of Annex III, the tech­ni­cal docu­men­ta­ti­on refer­red to in Annex IV shall remain within the pre­mi­ses of tho­se aut­ho­ri­ties. Tho­se aut­ho­ri­ties shall ensu­re that the mar­ket sur­veil­lan­ce aut­ho­ri­ties refer­red to in Artic­le 74(8) and (9), as appli­ca­ble, can, upon request, imme­dia­te­ly access the docu­men­ta­ti­on or obtain a copy the­reof. Only staff of the mar­ket sur­veil­lan­ce aut­ho­ri­ty hol­ding the appro­pria­te level of secu­ri­ty cle­ar­ance shall be allo­wed to access that docu­men­ta­ti­on or any copy thereof.

4. Para­graphs 1, 2 and 3 shall not affect the rights or obli­ga­ti­ons of the Com­mis­si­on, Mem­ber Sta­tes and their rele­vant aut­ho­ri­ties, as well as tho­se of noti­fi­ed bodies, with regard to the exch­an­ge of infor­ma­ti­on and the dis­se­mi­na­ti­on of war­nings, inclu­ding in the con­text of cross-bor­der coope­ra­ti­on, nor shall they affect the obli­ga­ti­ons of the par­ties con­cer­ned to pro­vi­de infor­ma­ti­on under cri­mi­nal law of the Mem­ber States.

5. The Com­mis­si­on and Mem­ber Sta­tes may exch­an­ge, whe­re neces­sa­ry and in accordance with rele­vant pro­vi­si­ons of inter­na­tio­nal and trade agree­ments, con­fi­den­ti­al infor­ma­ti­on with regu­la­to­ry aut­ho­ri­ties of third count­ries with which they have con­clu­ded bila­te­ral or mul­ti­la­te­ral con­fi­den­tia­li­ty arran­ge­ments gua­ran­te­e­ing an ade­qua­te level of confidentiality.

(167) In order to ensu­re trustful and cons­truc­ti­ve coope­ra­ti­on of com­pe­tent aut­ho­ri­ties on Uni­on and natio­nal level, all par­ties invol­ved in the appli­ca­ti­on of this Regu­la­ti­on should respect the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks, in accordance with Uni­on or natio­nal law. They should car­ry out their tasks and acti­vi­ties in such a man­ner as to pro­tect, in par­ti­cu­lar, intellec­tu­al pro­per­ty rights, con­fi­den­ti­al busi­ness infor­ma­ti­on and trade secrets, the effec­ti­ve imple­men­ta­ti­on of this Regu­la­ti­on, public and natio­nal secu­ri­ty inte­rests, the inte­gri­ty of cri­mi­nal and admi­ni­stra­ti­ve pro­ce­e­dings, and the inte­gri­ty of clas­si­fi­ed information.

Article 79 Procedure at national level for dealing with AI systems presenting a risk

1. AI systems presenting a risk shall be understood as a “product presenting a risk” as defined in Article 3, point 19 of Regulation (EU) 2019/1020, in so far as they present risks to the health or safety, or to fundamental rights, of persons.

2. Where the market surveillance authority of a Member State has sufficient reason to consider an AI system to present a risk as referred to in paragraph 1 of this Article, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. Particular attention shall be given to AI systems presenting a risk to vulnerable groups. Where risks to fundamental rights are identified, the market surveillance authority shall also inform and fully cooperate with the relevant national public authorities or bodies referred to in Article 77(1). The relevant operators shall cooperate as necessary with the market surveillance authority and with the other national public authorities or bodies referred to in Article 77(1).

Where, in the course of that evaluation, the market surveillance authority or, where applicable, the market surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation.

The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph of this paragraph.

3. Where the market surveillance authority considers that the non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take.

4. The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market.

5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system’s being made available on its national market or put into service, to withdraw the product or the standalone AI system from that market or to recall it. That authority shall without undue delay notify the Commission and the other Member States of those measures.

6. The notification referred to in paragraph 5 shall include all available details, in particular the information necessary for the identification of the non-compliant AI system, the origin of the AI system and the supply chain, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:

(a) non-compliance with the prohibition of the AI practices referred to in Article 5;

(b) a failure of a high-risk AI system to meet requirements set out in Chapter III, Section 2;

(c) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity;

(d) non-compliance with Article 50.

7. The market surveillance authorities other than the market surveillance authority of the Member State initiating the procedure shall, without undue delay, inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.

8. Where, within three months of receipt of the notification referred to in paragraph 5 of this Article, no objection has been raised by either a market surveillance authority of a Member State or by the Commission in respect of a provisional measure taken by a market surveillance authority of another Member State, that measure shall be deemed justified. This shall be without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020. The three-month period referred to in this paragraph shall be reduced to 30 days in the event of non-compliance with the prohibition of the AI practices referred to in Article 5 of this Regulation.

9. The market surveillance authorities shall ensure that appropriate restrictive measures are taken in respect of the product or the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue delay.

Article 80 Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III

1. Where a market surveillance authority has sufficient reason to consider that an AI system classified by the provider as non-high-risk pursuant to Article 6(3) is indeed high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Article 6(3) and the Commission guidelines.

2. Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period the market surveillance authority may prescribe.

3. Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the provider to take.

4. The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99.

5. The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market.

6. Where the provider of the AI system concerned does not take adequate corrective action within the period referred to in paragraph 2 of this Article, Article 79(5) to (9) shall apply.

7. Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99.

In exercising their power to monitor the application of this Article, and in accordance with Article 11 of Regulation (EU) 2019/1020, market surveillance authorities may perform appropriate checks, taking into account in particular information stored in the EU database referred to in Article 71 of this Regulation.

Article 81 Union safeguard procedure

1. Where, within three months of receipt of the notification referred to in Article 79(5), or within 30 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, objections are raised by the market surveillance authority of a Member State to a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall, within six months, or within 60 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, starting from the notification referred to in Article 79(5), decide whether the national measure is justified and shall notify its decision to the market surveillance authority of the Member State concerned. The Commission shall also inform all other market surveillance authorities of its decision.

2. Where the Commission considers the measure taken by the relevant Member State to be justified, all Member States shall ensure that they take appropriate restrictive measures in respect of the AI system concerned, such as requiring the withdrawal of the AI system from their market without undue delay, and shall inform the Commission accordingly. Where the Commission considers the national measure to be unjustified, the Member State concerned shall withdraw the measure and shall inform the Commission accordingly.

3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.

Article 82 Compliant AI systems which present a risk

1. Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe.

2. The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1.

3. The Member States shall immediately inform the Commission and the other Member States of a finding under paragraph 1. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.

4. The Commission shall without undue delay enter into consultation with the Member States concerned and the relevant operators, and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified and, where necessary, propose other appropriate measures.

The Commission shall immediately communicate its decision to the Member States concerned and to the relevant operators. It shall also inform the other Member States.

Article 83 Formal non-compliance

1. Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe:

(a) the CE marking has been affixed in violation of Article 48;

(b) the CE marking has not been affixed;

(c) the EU declaration of conformity referred to in Article 47 has not been drawn up;

(d) the EU declaration of conformity referred to in Article 47 has not been drawn up correctly;

(e) the registration in the EU database referred to in Article 71 has not been carried out;

(f) where applicable, no authorised representative has been appointed;

(g) technical documentation is not available.

2. Where the non-compliance referred to in paragraph 1 persists, the market surveillance authority of the Member State concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or to ensure that it is recalled or withdrawn from the market without delay.

Article 84 Union AI testing support structures

1. The Commission shall designate one or more Union AI testing support structures to perform the tasks listed under Article 21(6) of Regulation (EU) 2019/1020 in the area of AI.

2. Without prejudice to the tasks referred to in paragraph 1, Union AI testing support structures shall also provide independent technical or scientific advice at the request of the Board, the Commission, or of market surveillance authorities.

(152) In order to support adequate enforcement as regards AI systems and reinforce the capacities of the Member States, Union AI testing support structures should be established and made available to the Member States.

Section 4 Remedies

Article 85 Right to lodge a complaint with a market surveillance authority

Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority.

In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities.

(170) Union and national law already provide effective remedies to natural and legal persons whose rights and freedoms are adversely affected by the use of AI systems. Without prejudice to those remedies, any natural or legal person that has grounds to consider that there has been an infringement of this Regulation should be entitled to lodge a complaint to the relevant market surveillance authority.

Article 86 Right to explanation of individual decision-making

1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.

2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under that paragraph follow from Union or national law in compliance with Union law.

3. This Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under Union law.

(171) Affected persons should have the right to obtain an explanation where a deployer’s decision is based mainly upon the output from certain high-risk AI systems that fall within the scope of this Regulation and where that decision produces legal effects or similarly significantly affects those persons in a way that they consider to have an adverse impact on their health, safety or fundamental rights. That explanation should be clear and meaningful and should provide a basis on which the affected persons are able to exercise their rights. The right to obtain an explanation should not apply to the use of AI systems for which exceptions or restrictions follow from Union or national law and should apply only to the extent this right is not already provided for under Union law.

Article 87 Reporting of infringements and protection of reporting persons

Directive (EU) 2019/1937 shall apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.

(172) Persons acting as whistleblowers on the infringements of this Regulation should be protected under the Union law. Directive (EU) 2019/1937 of the European Parliament and of the Council should therefore apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.

Section 5 Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General-Purpose AI Models

Article 88 Enforcement of the obligations of providers of general-purpose AI models

1. The Commission shall have exclusive powers to supervise and enforce Chapter V, taking into account the procedural guarantees under Article 94. The Commission shall entrust the implementation of these tasks to the AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties.

2. Without prejudice to Article 75(3), market surveillance authorities may request the Commission to exercise the powers laid down in this Section, where that is necessary and proportionate to assist with the fulfilment of their tasks under this Regulation.

Article 89 Monitoring actions

1. For the purpose of carrying out the tasks assigned to it under this Section, the AI Office may take the necessary actions to monitor the effective implementation and compliance with this Regulation by providers of general-purpose AI models, including their adherence to approved codes of practice.

2. Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and indicate at least:

(a) the point of contact of the provider of the general-purpose AI model concerned;

(b) a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation;

(c) any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative.

Article 90 Alerts of systemic risks by the scientific panel

1. The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that:

(a) a general-purpose AI model poses concrete identifiable risk at Union level; or,

(b) a general-purpose AI model meets the conditions referred to in Article 51.

2. Upon such qualified alert, the Commission, through the AI Office and after having informed the Board, may exercise the powers laid down in this Section for the purpose of assessing the matter. The AI Office shall inform the Board of any measure according to Articles 91 to 94.

3. A qualified alert shall be duly reasoned and indicate at least:

(a) the point of contact of the provider of the general-purpose AI model with systemic risk concerned;

(b) a description of the relevant facts and the reasons for the alert by the scientific panel;

(c) any other information that the scientific panel considers to be relevant, including, where appropriate, information gathered on its own initiative.

Article 91 Power to request documentation and information

1. The Commission may request the provider of the general-purpose AI model concerned to provide the documentation drawn up by the provider in accordance with Articles 53 and 55, or any additional information that is necessary for the purpose of assessing compliance of the provider with this Regulation.

2. Before sending the request for information, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model.

3. Upon a duly substantiated request from the scientific panel, the Commission may issue a request for information to a provider of a general-purpose AI model, where the access to information is necessary and proportionate for the fulfilment of the tasks of the scientific panel under Article 68(2).

4. The request for information shall state the legal basis and the purpose of the request, specify what information is required, set a period within which the information is to be provided, and indicate the fines provided for in Article 101 for supplying incorrect, incomplete or misleading information.

5. The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading.

Article 92 Power to conduct evaluations

1. The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned:

(a) to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant to Article 91 is insufficient; or,

(b) to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 90(1), point (a).

2. The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. Independent experts appointed for this task shall meet the criteria outlined in Article 68(2).

3. For the purposes of paragraph 1, the Commission may request access to the general-purpose AI model concerned through APIs or further appropriate technical means and tools, including source code.

4. The request for access shall state the legal basis, the purpose and reasons of the request and set the period within which the access is to be provided, and the fines provided for in Article 101 for failure to provide access.

5. The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned.

6. The Commission shall adopt implementing acts setting out the detailed arrangements and the conditions for the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).

7. Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate such risks.

Article 93 Power to request measures

1. Where necessary and appropriate, the Commission may request providers to:

(a) take appropriate measures to comply with the obligations set out in Articles 53 and 54;

(b) implement mitigation measures, where the evaluation carried out in accordance with Article 92 has given rise to serious and substantiated concern of a systemic risk at Union level;

(c) restrict the making available on the market, withdraw or recall the model.

2. Before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model.

3. If, during the structured dialogue referred to in paragraph 2, the provider of the general-purpose AI model with systemic risk offers commitments to implement mitigation measures to address a systemic risk at Union level, the Commission may, by decision, make those commitments binding and declare that there are no further grounds for action.

Article 94 Procedural rights of economic operators of the general-purpose AI model

Article 18 of Regulation (EU) 2019/1020 shall apply mutatis mutandis to the providers of the general-purpose AI model, without prejudice to more specific procedural rights provided for in this Regulation.

Chapter X Codes of conduct and guidelines

Article 95 Codes of conduct for voluntary application of specific requirements

1. The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and industry best practices allowing for the application of such requirements.

2. The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives, including elements such as, but not limited to:

(a) applicable elements provided for in Union ethical guidelines for trustworthy AI;

(b) assessing and minimising the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for the efficient design, training and use of AI;

(c) promoting AI literacy, in particular that of persons dealing with the development, operation and use of AI;

(d) facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse development teams and the promotion of stakeholders’ participation in that process;

(e) assessing and preventing the negative impact of AI systems on vulnerable persons or groups of vulnerable persons, including as regards accessibility for persons with a disability, as well as on gender equality.

3. Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.

4. The AI Office and the Member States shall take into account the specific interests and needs of SMEs, including start-ups, when encouraging and facilitating the drawing up of codes of conduct.

(165) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of ethical and trustworthy AI in the Union. Providers of AI systems that are not high-risk should be encouraged to create codes of conduct, including related governance mechanisms, intended to foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved and taking into account the available technical solutions and industry best practices such as model and data cards. Providers and, as appropriate, deployers of all AI systems, high-risk or not, and AI models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, including attention to vulnerable persons and accessibility to persons with disability, stakeholders’ participation with the involvement, as appropriate, of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations in the design and development of AI systems, and diversity of the development teams, including gender balance. To ensure that the voluntary codes of conduct are effective, they should be based on clear objectives and key performance indicators to measure the achievement of those objectives.
They should also be developed in an inclusive way, as appropriate, with the involvement of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations. The Commission may develop initiatives, including of a sectoral nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.

Article 96 Guidelines from the Commission on the implementation of this Regulation

1. The Commission shall develop guidelines on the practical implementation of this Regulation, and in particular on:

(a) the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25;

(b) the prohibited practices referred to in Article 5;

(c) the practical implementation of the provisions related to substantial modification;

(d) the practical implementation of transparency obligations laid down in Article 50;

(e) detailed information on the relationship of this Regulation with the Union harmonisation legislation listed in Annex I, as well as with other relevant Union law, including as regards consistency in their enforcement;

(f) the application of the definition of an AI system as set out in Article 3, point (1).

When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, of local public authorities and of the sectors most likely to be affected by this Regulation.

The guidelines referred to in the first subparagraph of this paragraph shall take due account of the generally acknowledged state of the art on AI, as well as of relevant harmonised standards and common specifications that are referred to in Articles 40 and 41, or of those harmonised standards or technical specifications that are set out pursuant to Union harmonisation law.

2. At the request of the Member States or the AI Office, or on its own initiative, the Commission shall update guidelines previously adopted when deemed necessary.

Chap­ter XI Dele­ga­ti­on of power and com­mit­tee procedure

Artic­le 97 Exer­cise of the delegation

1. The power to adopt dele­ga­ted acts is con­fer­red on the Com­mis­si­on sub­ject to the con­di­ti­ons laid down in this Article.

2. The power to adopt delegated acts referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) shall be conferred on the Commission for a period of five years from … [date of entry into force of this Regulation]. The Commission shall draw up a report in respect of the delegation of power not later than nine months before the end of the five-year period. The delegation of power shall be tacitly extended for periods of an identical duration, unless the European Parliament or the Council opposes such extension not later than three months before the end of each period.

3. The dele­ga­ti­on of power refer­red to in Artic­le 6(6) and (7), Artic­le 7(1) and (3), Artic­le 11(3), Artic­le 43(5) and (6), Artic­le 47(5), Artic­le 51(3), Artic­le 52(4) and Artic­le 53(5) and (6) may be revo­ked at any time by the Euro­pean Par­lia­ment or by the Coun­cil. A decis­i­on of revo­ca­ti­on shall put an end to the dele­ga­ti­on of power spe­ci­fi­ed in that decis­i­on. It shall take effect the day fol­lo­wing that of its publi­ca­ti­on in the Offi­ci­al Jour­nal of the Euro­pean Uni­on or at a later date spe­ci­fi­ed the­r­ein. It shall not affect the vali­di­ty of any dele­ga­ted acts alre­a­dy in force.

4. Befo­re adop­ting a dele­ga­ted act, the Com­mis­si­on shall con­sult experts desi­gna­ted by each Mem­ber Sta­te in accordance with the prin­ci­ples laid down in the Inter­in­sti­tu­tio­nal Agree­ment of 13 April 2016 on Bet­ter Law-Making.

5. As soon as it adopts a dele­ga­ted act, the Com­mis­si­on shall noti­fy it simul­ta­neous­ly to the Euro­pean Par­lia­ment and to the Council.

6. Any dele­ga­ted act adopted pur­su­ant to Artic­le 6(6) or (7), Artic­le 7(1) or (3), Artic­le 11(3), Artic­le 43(5) or (6), Artic­le 47(5), Artic­le 51(3), Artic­le 52(4) or Artic­le 53(5) or (6) shall enter into force only if no objec­tion has been expres­sed by eit­her the Euro­pean Par­lia­ment or the Coun­cil within a peri­od of three months of noti­fi­ca­ti­on of that act to the Euro­pean Par­lia­ment and the Coun­cil or if, befo­re the expiry of that peri­od, the Euro­pean Par­lia­ment and the Coun­cil have both infor­med the Com­mis­si­on that they will not object. That peri­od shall be exten­ded by three months at the initia­ti­ve of the Euro­pean Par­lia­ment or of the Council.

(173) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the conditions under which an AI system is not to be considered to be high-risk, the list of high-risk AI systems, the provisions regarding technical documentation, the content of the EU declaration of conformity, the provisions regarding the conformity assessment procedures, the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply, the threshold, benchmarks and indicators, including by supplementing those benchmarks and indicators, in the rules for the classification of general-purpose AI models with systemic risk, the criteria for the designation of general-purpose AI models with systemic risk, the technical documentation for providers of general-purpose AI models and the transparency information for providers of general-purpose AI models. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.

(175) In order to ensu­re uni­form con­di­ti­ons for the imple­men­ta­ti­on of this Regu­la­ti­on, imple­men­ting powers should be con­fer­red on the Com­mis­si­on. Tho­se powers should be exer­cis­ed in accordance with Regu­la­ti­on (EU) No 182/2011 of the Euro­pean Par­lia­ment and of the Council.

Artic­le 98 Com­mit­tee procedure

1. The Com­mis­si­on shall be assi­sted by a com­mit­tee. That com­mit­tee shall be a com­mit­tee within the mea­ning of Regu­la­ti­on (EU) No 182/2011.

2. Whe­re refe­rence is made to this para­graph, Artic­le 5 of Regu­la­ti­on (EU) No 182/2011 shall apply.

Chap­ter XII Penalties

Artic­le 99 Penalties

1. In accordance with the terms and con­di­ti­ons laid down in this Regu­la­ti­on, Mem­ber Sta­tes shall lay down the rules on pen­al­ties and other enforce­ment mea­su­res, which may also include war­nings and non-mone­ta­ry mea­su­res, appli­ca­ble to inf­rin­ge­ments of this Regu­la­ti­on by ope­ra­tors, and shall take all mea­su­res neces­sa­ry to ensu­re that they are pro­per­ly and effec­tively imple­men­ted, ther­eby taking into account the gui­de­lines issued by the Com­mis­si­on pur­su­ant to Artic­le 96. The pen­al­ties pro­vi­ded for shall be effec­ti­ve, pro­por­tio­na­te and dissua­si­ve. They shall take into account the inte­rests of SMEs, inclu­ding start-ups, and their eco­no­mic viability. 

2. The Mem­ber Sta­tes shall, wit­hout delay and at the latest by the date of ent­ry into appli­ca­ti­on, noti­fy the Com­mis­si­on of the rules on pen­al­ties and of other enforce­ment mea­su­res refer­red to in para­graph 1, and shall noti­fy it, wit­hout delay, of any sub­se­quent amend­ment to them.

3. Non-com­pli­ance with the pro­hi­bi­ti­on of the AI prac­ti­ces refer­red to in Artic­le 5 shall be sub­ject to admi­ni­stra­ti­ve fines of up to 35 000 000 EUR or, if the offen­der is an under­ta­king, up to 7 % of its total world­wi­de annu­al tur­no­ver for the pre­ce­ding finan­cial year, whi­che­ver is higher.

4. Non-compliance with any of the following provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to 15 000 000 EUR or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:

(a) obli­ga­ti­ons of pro­vi­ders pur­su­ant to Artic­le 16;

(b) obli­ga­ti­ons of aut­ho­ri­sed repre­sen­ta­ti­ves pur­su­ant to Artic­le 22;

(c) obli­ga­ti­ons of importers pur­su­ant to Artic­le 23;

(d) obli­ga­ti­ons of dis­tri­bu­tors pur­su­ant to Artic­le 24;

(e) obli­ga­ti­ons of deployers pur­su­ant to Artic­le 26;

(f) requi­re­ments and obli­ga­ti­ons of noti­fi­ed bodies pur­su­ant to Artic­le 31, Artic­le 33(1), (3) and (4) or Artic­le 34;

(g) trans­pa­ren­cy obli­ga­ti­ons for pro­vi­ders and deployers pur­su­ant to Artic­le 50.

5. The sup­p­ly of incor­rect, incom­ple­te or mis­lea­ding infor­ma­ti­on to noti­fi­ed bodies or natio­nal com­pe­tent aut­ho­ri­ties in rep­ly to a request shall be sub­ject to admi­ni­stra­ti­ve fines of up to 7 500 000 EUR or, if the offen­der is an under­ta­king, up to 1 % of its total world­wi­de annu­al tur­no­ver for the pre­ce­ding finan­cial year, whi­che­ver is higher.

6. In the case of SMEs, inclu­ding start-ups, each fine refer­red to in this Artic­le shall be up to the per­cen­ta­ges or amount refer­red to in para­graphs 3, 4 and 5, whi­che­ver the­reof is lower. 
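The fine caps in paragraphs 3 to 5 combine a fixed amount with a percentage of total worldwide annual turnover, whichever is higher, while paragraph 6 reverses this for SMEs, including start-ups, which benefit from whichever of the two is lower. Purely as a reading aid (not part of the Regulation), that interaction can be sketched in Python; the function name and category labels are illustrative assumptions, not terms from the text:

```python
def max_fine(category: str, turnover_eur: int, is_sme: bool = False) -> int:
    """Upper limit of the administrative fine under Article 99(3)-(6).

    Fixed amounts (EUR) and percentages are taken from the text;
    the category labels are illustrative shorthand:
      'art5'            - prohibited practices, paragraph 3
      'obligations'     - provisions listed in paragraph 4
      'misleading_info' - incorrect/incomplete/misleading information, paragraph 5
    """
    caps = {
        "art5": (35_000_000, 7),
        "obligations": (15_000_000, 3),
        "misleading_info": (7_500_000, 1),
    }
    fixed_amount, pct = caps[category]
    percentage_amount = turnover_eur * pct // 100
    # Paragraphs 3-5: "whichever is higher";
    # paragraph 6 (SMEs): "whichever thereof is lower".
    if is_sme:
        return min(fixed_amount, percentage_amount)
    return max(fixed_amount, percentage_amount)
```

For example, for an undertaking with a worldwide annual turnover of EUR 1 billion the cap for a prohibited practice is EUR 70 million (7 %), whereas an SME with a turnover of EUR 10 million faces a cap of EUR 700 000.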

7. When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and, as appropriate, regard shall be given to the following:

(a) the natu­re, gra­vi­ty and dura­ti­on of the inf­rin­ge­ment and of its con­se­quen­ces, taking into account the pur­po­se of the AI system, as well as, whe­re appro­pria­te, the num­ber of affec­ted per­sons and the level of dama­ge suf­fe­r­ed by them;

(b) whe­ther admi­ni­stra­ti­ve fines have alre­a­dy been applied by other mar­ket sur­veil­lan­ce aut­ho­ri­ties to the same ope­ra­tor for the same infringement;

(c) whe­ther admi­ni­stra­ti­ve fines have alre­a­dy been applied by other aut­ho­ri­ties to the same ope­ra­tor for inf­rin­ge­ments of other Uni­on or natio­nal law, when such inf­rin­ge­ments result from the same acti­vi­ty or omis­si­on con­sti­tu­ting a rele­vant inf­rin­ge­ment of this Regulation;

(d) the size, the annu­al tur­no­ver and mar­ket share of the ope­ra­tor com­mit­ting the infringement;

(e) any other aggravating or miti­ga­ting fac­tor appli­ca­ble to the cir­cum­stances of the case, such as finan­cial bene­fits gai­ned, or los­ses avo­ided, direct­ly or indi­rect­ly, from the infringement;

(f) the degree of coope­ra­ti­on with the natio­nal com­pe­tent aut­ho­ri­ties, in order to reme­dy the inf­rin­ge­ment and miti­ga­te the pos­si­ble adver­se effects of the infringement;

(g) the degree of respon­si­bi­li­ty of the ope­ra­tor taking into account the tech­ni­cal and orga­ni­sa­tio­nal mea­su­res imple­men­ted by it;

(h) the man­ner in which the inf­rin­ge­ment beca­me known to the natio­nal com­pe­tent aut­ho­ri­ties, in par­ti­cu­lar whe­ther, and if so to what ext­ent, the ope­ra­tor noti­fi­ed the infringement;

(i) the inten­tio­nal or negli­gent cha­rac­ter of the infringement;

(j) any action taken by the ope­ra­tor to miti­ga­te the harm suf­fe­r­ed by the affec­ted persons.

8. Each Mem­ber Sta­te shall lay down rules on to what ext­ent admi­ni­stra­ti­ve fines may be impo­sed on public aut­ho­ri­ties and bodies estab­lished in that Mem­ber State.

9. Depen­ding on the legal system of the Mem­ber Sta­tes, the rules on admi­ni­stra­ti­ve fines may be applied in such a man­ner that the fines are impo­sed by com­pe­tent natio­nal courts or by other bodies, as appli­ca­ble in tho­se Mem­ber Sta­tes. The appli­ca­ti­on of such rules in tho­se Mem­ber Sta­tes shall have an equi­va­lent effect.

10. The exer­cise of powers under this Artic­le shall be sub­ject to appro­pria­te pro­ce­du­ral safe­guards in accordance with Uni­on and natio­nal law, inclu­ding effec­ti­ve judi­cial reme­dies and due process.

11. Mem­ber Sta­tes shall, on an annu­al basis, report to the Com­mis­si­on about the admi­ni­stra­ti­ve fines they have issued during that year, in accordance with this Artic­le, and about any rela­ted liti­ga­ti­on or judi­cial proceedings.

(168) Com­pli­ance with this Regu­la­ti­on should be enforceable by means of the impo­si­ti­on of pen­al­ties and other enforce­ment mea­su­res. Mem­ber Sta­tes should take all neces­sa­ry mea­su­res to ensu­re that the pro­vi­si­ons of this Regu­la­ti­on are imple­men­ted, inclu­ding by lay­ing down effec­ti­ve, pro­por­tio­na­te and dissua­si­ve pen­al­ties for their inf­rin­ge­ment, and to respect the ne bis in idem prin­ci­ple. In order to streng­then and har­mo­ni­se admi­ni­stra­ti­ve pen­al­ties for inf­rin­ge­ment of this Regu­la­ti­on, the upper limits for set­ting the admi­ni­stra­ti­ve fines for cer­tain spe­ci­fic inf­rin­ge­ments should be laid down. When asses­sing the amount of the fines, Mem­ber Sta­tes should, in each indi­vi­du­al case, take into account all rele­vant cir­cum­stances of the spe­ci­fic situa­ti­on, with due regard in par­ti­cu­lar to the natu­re, gra­vi­ty and dura­ti­on of the inf­rin­ge­ment and of its con­se­quen­ces and to the size of the pro­vi­der, in par­ti­cu­lar if the pro­vi­der is an SME, inclu­ding a start-up. The Euro­pean Data Pro­tec­tion Super­vi­sor should have the power to impo­se fines on Uni­on insti­tu­ti­ons, agen­ci­es and bodies fal­ling within the scope of this Regulation. 

Artic­le 100 Admi­ni­stra­ti­ve fines on Uni­on insti­tu­ti­ons, bodies, offices and agencies

1. The Euro­pean Data Pro­tec­tion Super­vi­sor may impo­se admi­ni­stra­ti­ve fines on Uni­on insti­tu­ti­ons, bodies, offices and agen­ci­es fal­ling within the scope of this Regu­la­ti­on. When deci­ding whe­ther to impo­se an admi­ni­stra­ti­ve fine and when deci­ding on the amount of the admi­ni­stra­ti­ve fine in each indi­vi­du­al case, all rele­vant cir­cum­stances of the spe­ci­fic situa­ti­on shall be taken into account and due regard shall be given to the following:

(a) the natu­re, gra­vi­ty and dura­ti­on of the inf­rin­ge­ment and of its con­se­quen­ces, taking into account the pur­po­se of the AI system con­cer­ned, as well as, whe­re appro­pria­te, the num­ber of affec­ted per­sons and the level of dama­ge suf­fe­r­ed by them;

(b) the degree of respon­si­bi­li­ty of the Uni­on insti­tu­ti­on, body, office or agen­cy, taking into account tech­ni­cal and orga­ni­sa­tio­nal mea­su­res imple­men­ted by them;

(c) any action taken by the Uni­on insti­tu­ti­on, body, office or agen­cy to miti­ga­te the dama­ge suf­fe­r­ed by affec­ted persons;

(d) the degree of coope­ra­ti­on with the Euro­pean Data Pro­tec­tion Super­vi­sor in order to reme­dy the inf­rin­ge­ment and miti­ga­te the pos­si­ble adver­se effects of the inf­rin­ge­ment, inclu­ding com­pli­ance with any of the mea­su­res pre­vious­ly orde­red by the Euro­pean Data Pro­tec­tion Super­vi­sor against the Uni­on insti­tu­ti­on, body, office or agen­cy con­cer­ned with regard to the same sub­ject matter;

(e) any simi­lar pre­vious inf­rin­ge­ments by the Uni­on insti­tu­ti­on, body, office or agency;

(f) the man­ner in which the inf­rin­ge­ment beca­me known to the Euro­pean Data Pro­tec­tion Super­vi­sor, in par­ti­cu­lar whe­ther, and if so to what ext­ent, the Uni­on insti­tu­ti­on, body, office or agen­cy noti­fi­ed the infringement;

(g) the annu­al bud­get of the Uni­on insti­tu­ti­on, body, office or agency. 

2. Non-com­pli­ance with the pro­hi­bi­ti­on of the AI prac­ti­ces refer­red to in Artic­le 5 shall be sub­ject to admi­ni­stra­ti­ve fines of up to EUR 1 500 000.

3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 750 000.

4. Befo­re taking decis­i­ons pur­su­ant to this Artic­le, the Euro­pean Data Pro­tec­tion Super­vi­sor shall give the Uni­on insti­tu­ti­on, body, office or agen­cy which is the sub­ject of the pro­ce­e­dings con­duc­ted by the Euro­pean Data Pro­tec­tion Super­vi­sor the oppor­tu­ni­ty of being heard on the mat­ter regar­ding the pos­si­ble inf­rin­ge­ment. The Euro­pean Data Pro­tec­tion Super­vi­sor shall base his or her decis­i­ons only on ele­ments and cir­cum­stances on which the par­ties con­cer­ned have been able to com­ment. Com­plainants, if any, shall be asso­cia­ted clo­se­ly with the proceedings. 

5. The rights of defence of the par­ties con­cer­ned shall be ful­ly respec­ted in the pro­ce­e­dings. They shall be entit­led to have access to the Euro­pean Data Pro­tec­tion Supervisor’s file, sub­ject to the legi­ti­ma­te inte­rest of indi­vi­du­als or under­ta­kings in the pro­tec­tion of their per­so­nal data or busi­ness secrets.

6. Funds coll­ec­ted by impo­si­ti­on of fines in this Artic­le shall con­tri­bu­te to the gene­ral bud­get of the Uni­on. The fines shall not affect the effec­ti­ve ope­ra­ti­on of the Uni­on insti­tu­ti­on, body, office or agen­cy fined.

7. The Euro­pean Data Pro­tec­tion Super­vi­sor shall, on an annu­al basis, noti­fy the Com­mis­si­on of the admi­ni­stra­ti­ve fines it has impo­sed pur­su­ant to this Artic­le and of any liti­ga­ti­on or judi­cial pro­ce­e­dings it has initiated.

(169) Compliance with the obligations on providers of general-purpose AI models imposed under this Regulation should be enforceable, inter alia, by means of fines. To that end, appropriate levels of fines should also be laid down for infringement of those obligations, including the failure to comply with measures requested by the Commission in accordance with this Regulation, subject to appropriate limitation periods in accordance with the principle of proportionality. All decisions taken by the Commission under this Regulation are subject to review by the Court of Justice of the European Union in accordance with the TFEU, including the unlimited jurisdiction of the Court of Justice with regard to penalties pursuant to Article 261 TFEU.

Artic­le 101 Fines for pro­vi­ders of gene­ral-pur­po­se AI models

1. The Commission may impose on providers of general-purpose AI models fines not exceeding 3 % of their annual total worldwide turnover in the preceding financial year or EUR 15 000 000, whichever is higher, when the Commission finds that the provider intentionally or negligently:

(a) inf­rin­ged the rele­vant pro­vi­si­ons of this Regulation;

(b) failed to comply with a request for a document or for information pursuant to Article 91, or supplied incorrect, incomplete or misleading information;

(c) fai­led to com­ply with a mea­su­re reque­sted under Artic­le 93;

(d) fai­led to make available to the Com­mis­si­on access to the gene­ral-pur­po­se AI model or gene­ral-pur­po­se AI model with syste­mic risk with a view to con­duc­ting an eva­lua­ti­on pur­su­ant to Artic­le 92.

In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Article 93(3) or made in relevant codes of practice in accordance with Article 56.

2. Befo­re adop­ting the decis­i­on pur­su­ant to para­graph 1, the Com­mis­si­on shall com­mu­ni­ca­te its preli­mi­na­ry fin­dings to the pro­vi­der of the gene­ral-pur­po­se AI model and give it an oppor­tu­ni­ty to be heard.

3. Fines impo­sed in accordance with this Artic­le shall be effec­ti­ve, pro­por­tio­na­te and dissuasive.

4. Infor­ma­ti­on on fines impo­sed under this Artic­le shall also be com­mu­ni­ca­ted to the Board as appropriate.

5. The Court of Justi­ce of the Euro­pean Uni­on shall have unli­mi­t­ed juris­dic­tion to review decis­i­ons of the Com­mis­si­on fixing a fine under this Artic­le. It may can­cel, redu­ce or increa­se the fine imposed.

6. The Com­mis­si­on shall adopt imple­men­ting acts con­tai­ning detail­ed arran­ge­ments and pro­ce­du­ral safe­guards for pro­ce­e­dings in view of the pos­si­ble adop­ti­on of decis­i­ons pur­su­ant to para­graph 1 of this Artic­le. Tho­se imple­men­ting acts shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 98(2).

Chap­ter XIII Final provisions

Artic­le 102 Amend­ment to Regu­la­ti­on (EC) No 300/2008

In Artic­le 4(3) of Regu­la­ti­on (EC) No 300/2008, the fol­lo­wing sub­pa­ra­graph is added:

When adop­ting detail­ed mea­su­res rela­ted to tech­ni­cal spe­ci­fi­ca­ti­ons and pro­ce­du­res for appr­oval and use of secu­ri­ty equip­ment con­cer­ning Arti­fi­ci­al Intel­li­gence systems within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 103 Amend­ment to Regu­la­ti­on (EU) No 167/2013

In Artic­le 17(5) of Regu­la­ti­on (EU) No 167/2013, the fol­lo­wing sub­pa­ra­graph is added: ‘When adop­ting dele­ga­ted acts pur­su­ant to the first sub­pa­ra­graph con­cer­ning arti­fi­ci­al intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 104 Amend­ment to Regu­la­ti­on (EU) No 168/2013

In Artic­le 22(5) of Regu­la­ti­on (EU) No 168/2013, the fol­lo­wing sub­pa­ra­graph is added: ‘When adop­ting dele­ga­ted acts pur­su­ant to the first sub­pa­ra­graph con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 105 Amend­ment to Direc­ti­ve 2014/90/EU

In Artic­le 8 of Direc­ti­ve 2014/90/EU, the fol­lo­wing para­graph is added:

5. For Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil+, when car­ry­ing out its acti­vi­ties pur­su­ant to para­graph 1 and when adop­ting tech­ni­cal spe­ci­fi­ca­ti­ons and test­ing stan­dards in accordance with para­graphs 2 and 3, the Com­mis­si­on shall take into account the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regulation.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 106 Amend­ment to Direc­ti­ve (EU) 2016/797

In Artic­le 5 of Direc­ti­ve (EU) 2016/797, the fol­lo­wing para­graph is added:

12. When adop­ting dele­ga­ted acts pur­su­ant to para­graph 1 and imple­men­ting acts pur­su­ant to para­graph 11 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 107 Amend­ment to Regu­la­ti­on (EU) 2018/858

In Artic­le 5 of Regu­la­ti­on (EU) 2018/858 the fol­lo­wing para­graph is added:

4. When adop­ting dele­ga­ted acts pur­su­ant to para­graph 3 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 108 Amend­ments to Regu­la­ti­on (EU) 2018/1139

Regu­la­ti­on (EU) 2018/1139 is amen­ded as follows:

(1) in Artic­le 17, the fol­lo­wing para­graph is added:

3. Wit­hout pre­ju­di­ce to para­graph 2, when adop­ting imple­men­ting acts pur­su­ant to para­graph 1 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regulation (EU) 2024/… of the European Parliament and of the Council of … laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, …, ELI: …).’;

(2) in Artic­le 19, the fol­lo­wing para­graph is added:

4. When adop­ting dele­ga­ted acts pur­su­ant to para­graphs 1 and 2 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/…++, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.’;

(3) in Artic­le 43, the fol­lo­wing para­graph is added:

4. When adop­ting imple­men­ting acts pur­su­ant to para­graph 1 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/…+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.’;

(4) in Artic­le 47, the fol­lo­wing para­graph is added:

3. When adop­ting dele­ga­ted acts pur­su­ant to para­graphs 1 and 2 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/…+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.’;

(5) in Artic­le 57, the fol­lo­wing sub­pa­ra­graph is added:

When adop­ting tho­se imple­men­ting acts con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/…+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.’;

(6) in Artic­le 58, the fol­lo­wing para­graph is added:

3. When adop­ting dele­ga­ted acts pur­su­ant to para­graphs 1 and 2 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/…+, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.’.

Artic­le 109 Amend­ment to Regu­la­ti­on (EU) 2019/2144

In Artic­le 11 of Regu­la­ti­on (EU) 2019/2144, the fol­lo­wing para­graph is added:

3. When adop­ting the imple­men­ting acts pur­su­ant to para­graph 2, con­cer­ning arti­fi­ci­al intel­li­gence systems which are safe­ty com­pon­ents within the mea­ning of Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil, the requi­re­ments set out in Chap­ter III, Sec­tion 2, of that Regu­la­ti­on shall be taken into account.

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …).’

Artic­le 110 Amend­ment to Direc­ti­ve (EU) 2020/1828

In Annex I to Direc­ti­ve (EU) 2020/1828 of the Euro­pean Par­lia­ment and of the Council58, the fol­lo­wing point is added:

(68) Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil of … + lay­ing down har­mo­ni­s­ed rules on arti­fi­ci­al intel­li­gence and amen­ding Regu­la­ti­ons (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Direc­ti­ves 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Arti­fi­ci­al Intel­li­gence Act) (OJ L, …, ELI: …)’.

Direc­ti­ve (EU) 2020/1828 of the Euro­pean Par­lia­ment and of the Coun­cil of 25 Novem­ber 2020 on repre­sen­ta­ti­ve actions for the pro­tec­tion of the coll­ec­ti­ve inte­rests of con­su­mers and repe­al­ing Direc­ti­ve 2009/22/EC (OJ L 409, 4.12.2020, p. 1).

Article 111 AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

1. Wit­hout pre­ju­di­ce to the appli­ca­ti­on of Artic­le 5 as refer­red to in Artic­le 113(3), point (a), AI systems which are com­pon­ents of the lar­ge-sca­le IT systems estab­lished by the legal acts listed in Annex X that have been pla­ced on the mar­ket or put into ser­vice befo­re … [36 months from the date of ent­ry into force of this Regu­la­ti­on] shall be brought into com­pli­ance with this Regu­la­ti­on by 31 Decem­ber 2030.

The requi­re­ments laid down in this Regu­la­ti­on shall be taken into account in the eva­lua­ti­on of each lar­ge-sca­le IT system estab­lished by the legal acts listed in Annex X to be under­ta­ken as pro­vi­ded for in tho­se legal acts and whe­re tho­se legal acts are repla­ced or amended. 

2. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation shall apply to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this Article, that have been placed on the market or put into service before … [24 months from the date of entry into force of this Regulation], only if, as from that date, those systems are subject to significant changes in their designs. In any case, the providers and deployers of high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the requirements and obligations of this Regulation by … [six years from the date of entry into force of this Regulation].

3. Providers of general-purpose AI models that have been placed on the market before … [12 months from the date of entry into force of this Regulation] shall take the necessary steps in order to comply with the obligations laid down in this Regulation by … [36 months from the date of entry into force of this Regulation].

Artic­le 112 Eva­lua­ti­on and review

1. The Com­mis­si­on shall assess the need for amend­ment of the list set out in Annex III and of the list of pro­hi­bi­ted AI prac­ti­ces laid down in Artic­le 5, once a year fol­lo­wing the ent­ry into force of this Regu­la­ti­on, and until the end of the peri­od of the dele­ga­ti­on of power laid down in Artic­le 97. The Com­mis­si­on shall sub­mit the fin­dings of that assess­ment to the Euro­pean Par­lia­ment and the Council.

2. By … [four years from the date of ent­ry into force of this Regu­la­ti­on] and every four years the­re­af­ter, the Com­mis­si­on shall eva­lua­te and report to the Euro­pean Par­lia­ment and to the Coun­cil on the following:

(a) the need for amend­ments exten­ding exi­sting area hea­dings or adding new area hea­dings in Annex III;

(b) amend­ments to the list of AI systems requi­ring addi­tio­nal trans­pa­ren­cy mea­su­res in Artic­le 50;

(c) amend­ments enhan­cing the effec­ti­ve­ness of the super­vi­si­on and gover­nan­ce system.

3. By … [five years from the date of entry into force of this Regulation] and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The report shall include an assessment with regard to the structure of enforcement and the possible need for a Union agency to resolve any identified shortcomings. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation. The reports shall be made public.

4. The reports refer­red to in para­graph 2 shall pay spe­ci­fic atten­ti­on to the following:

(a) the sta­tus of the finan­cial, tech­ni­cal and human resour­ces of the natio­nal com­pe­tent aut­ho­ri­ties in order to effec­tively per­form the tasks assi­gned to them under this Regulation;

(b) the sta­te of pen­al­ties, in par­ti­cu­lar admi­ni­stra­ti­ve fines as refer­red to in Artic­le 99(1), applied by Mem­ber Sta­tes for inf­rin­ge­ments of this Regulation;

(c) adopted har­mo­ni­s­ed stan­dards and com­mon spe­ci­fi­ca­ti­ons deve­lo­ped to sup­port this Regulation;

(d) the num­ber of under­ta­kings that enter the mar­ket after the ent­ry into appli­ca­ti­on of this Regu­la­ti­on, and how many of them are SMEs.

5. By … [four years from the date of entry into force of this Regulation], the Commission shall evaluate the functioning of the AI Office, whether the AI Office has been given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the AI Office and its enforcement competences and to increase its resources. The Commission shall submit a report on its evaluation to the European Parliament and to the Council.

6. By … [four years from the date of entry into force of this Regulation] and every four years thereafter, the Commission shall submit a report on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and assess the need for further measures or actions, including binding measures or actions. The report shall be submitted to the European Parliament and to the Council, and it shall be made public.

7. By … [four years from the date of ent­ry into force of this Regu­la­ti­on] and every three years the­re­af­ter, the Com­mis­si­on shall eva­lua­te the impact and effec­ti­ve­ness of vol­un­t­a­ry codes of con­duct to foster the appli­ca­ti­on of the requi­re­ments set out in Chap­ter III, Sec­tion 2 for AI systems other than high-risk AI systems and pos­si­bly other addi­tio­nal requi­re­ments for AI systems other than high-risk AI systems, inclu­ding as regards envi­ron­men­tal sustainability.

8. For the pur­po­ses of para­graphs 1 to 7, the Board, the Mem­ber Sta­tes and natio­nal com­pe­tent aut­ho­ri­ties shall pro­vi­de the Com­mis­si­on with infor­ma­ti­on upon its request and wit­hout undue delay.

9. In car­ry­ing out the eva­lua­tions and reviews refer­red to in para­graphs 1 to 7, the Com­mis­si­on shall take into account the posi­ti­ons and fin­dings of the Board, of the Euro­pean Par­lia­ment, of the Coun­cil, and of other rele­vant bodies or sources.

10. The Com­mis­si­on shall, if neces­sa­ry, sub­mit appro­pria­te pro­po­sals to amend this Regu­la­ti­on, in par­ti­cu­lar taking into account deve­lo­p­ments in tech­no­lo­gy, the effect of AI systems on health and safe­ty, and on fun­da­men­tal rights, and in light of the sta­te of pro­gress in the infor­ma­ti­on society.

11. To guide the evaluations and reviews referred to in paragraphs 1 to 7 of this Article, the AI Office shall undertake to develop an objective and participative methodology for the evaluation of risk levels based on the criteria outlined in the relevant Articles and the inclusion of new systems in:

(a) the list set out in Annex III, inclu­ding the exten­si­on of exi­sting area hea­dings or the addi­ti­on of new area hea­dings in that Annex;

(b) the list of pro­hi­bi­ted prac­ti­ces set out in Artic­le 5; and

(c) the list of AI systems requi­ring addi­tio­nal trans­pa­ren­cy mea­su­res pur­su­ant to Artic­le 50.

12. Any amend­ment to this Regu­la­ti­on pur­su­ant to para­graph 10, or rele­vant dele­ga­ted or imple­men­ting acts, which con­cerns sec­to­ral Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Sec­tion B of Annex I shall take into account the regu­la­to­ry spe­ci­fi­ci­ties of each sec­tor, and the exi­sting gover­nan­ce, con­for­mi­ty assess­ment and enforce­ment mecha­nisms and aut­ho­ri­ties estab­lished therein.

13. By … [seven years from the date of ent­ry into force of this Regu­la­ti­on], the Com­mis­si­on shall car­ry out an assess­ment of the enforce­ment of this Regu­la­ti­on and shall report on it to the Euro­pean Par­lia­ment, the Coun­cil and the Euro­pean Eco­no­mic and Social Com­mit­tee, taking into account the first years of appli­ca­ti­on of this Regu­la­ti­on. On the basis of the fin­dings, that report shall, whe­re appro­pria­te, be accom­pa­nied by a pro­po­sal for amend­ment of this Regu­la­ti­on with regard to the struc­tu­re of enforce­ment and the need for a Uni­on agen­cy to resol­ve any iden­ti­fi­ed shortcomings.

(174) Given the rapid technological developments and the technical expertise required to effectively apply this Regulation, the Commission should evaluate and review this Regulation by … [five years from the date of entry into force of this Regulation] and every four years thereafter and report to the European Parliament and the Council. In addition, taking into account the implications for the scope of this Regulation, the Commission should carry out an assessment of the need to amend the list of high-risk AI systems and the list of prohibited practices once a year. Moreover, by … [four years from the date of entry into force of this Regulation] and every four years thereafter, the Commission should evaluate and report to the European Parliament and to the Council on the need to amend the list of high-risk area headings in the annex to this Regulation, the AI systems within the scope of the transparency obligations, the effectiveness of the supervision and governance system and the progress on the development of standardisation deliverables on energy-efficient development of general-purpose AI models, including the need for further measures or actions. Finally, by … [four years from the date of entry into force of this Regulation] and every three years thereafter, the Commission should evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements provided for high-risk AI systems in the case of AI systems other than high-risk AI systems and possibly other additional requirements for such AI systems.

Artic­le 113 Ent­ry into force and application

This Regu­la­ti­on shall enter into force on the twen­tieth day fol­lo­wing that of its publi­ca­ti­on in the Offi­ci­al Jour­nal of the Euro­pean Union.

It shall app­ly from … [24 months from the date of ent­ry into force of this Regulation].

Howe­ver:

(a) Chap­ters I and II shall app­ly from … [six months from the date of ent­ry into force of this Regulation]; 

(b) Chap­ter III Sec­tion 4, Chap­ter V, Chap­ter VII and Chap­ter XII and Artic­le 78 shall app­ly from … [12 months from the date of ent­ry into force of this Regu­la­ti­on], with the excep­ti­on of Artic­le 101;

(c) Artic­le 6(1) and the cor­re­spon­ding obli­ga­ti­ons in this Regu­la­ti­on shall app­ly from … [36 months from the date of ent­ry into force of this Regulation].

This Regu­la­ti­on shall be bin­ding in its enti­re­ty and direct­ly appli­ca­ble in all Mem­ber States.

(177) In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems pursuant to this Regulation. On an exceptional basis and in light of public accountability, operators of AI systems which are components of the large-scale IT systems established by the legal acts listed in an annex to this Regulation and operators of high-risk AI systems that are intended to be used by public authorities should, respectively, take the necessary steps to comply with the requirements of this Regulation by the end of 2030 and by … [six years from the date of entry into force of this Regulation].

(178) Pro­vi­ders of high-risk AI systems are encou­ra­ged to start to com­ply, on a vol­un­t­a­ry basis, with the rele­vant obli­ga­ti­ons of this Regu­la­ti­on alre­a­dy during the tran­si­tio­nal period.

(179) This Regulation should apply from … [two years from the date of entry into force of this Regulation]. However, taking into account the unacceptable risk associated with the use of AI in certain ways, the prohibitions as well as the general provisions of this Regulation should already apply from … [six months from the date of entry into force of this Regulation]. While the full effect of those prohibitions follows with the establishment of the governance and enforcement of this Regulation, anticipating the application of the prohibitions is important to take account of unacceptable risks and to have an effect on other procedures, such as in civil law. Moreover, the infrastructure related to the governance and the conformity assessment system should be operational before … [two years from the date of entry into force of this Regulation], therefore the provisions on notified bodies and governance structure should apply from … [12 months from the date of entry into force of this Regulation]. Given the rapid pace of technological advancements and adoption of general-purpose AI models, obligations for providers of general-purpose AI models should apply from … [12 months from the date of entry into force of this Regulation]. Codes of practice should be ready by … [9 months from the date of entry into force of this Regulation] in view of enabling providers to demonstrate compliance on time. The AI Office should ensure that classification rules and procedures are up to date in light of technological developments. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore, the provisions on penalties should apply from … [12 months from the date of entry into force of this Regulation].

ANNEX I List of Uni­on har­mo­ni­sa­ti­on legislation

Sec­tion A. List of Uni­on har­mo­ni­sa­ti­on legis­la­ti­on based on the New Legis­la­ti­ve Framework

1. Direc­ti­ve 2006/42/EC of the Euro­pean Par­lia­ment and of the Coun­cil of 17 May 2006 on machi­nery, and amen­ding Direc­ti­ve 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repea­led by the Machi­nery Regulation];

2. Direc­ti­ve 2009/48/EC of the Euro­pean Par­lia­ment and of the Coun­cil of 18 June 2009 on the safe­ty of toys (OJ L 170, 30.6.2009, p. 1);

3. Direc­ti­ve 2013/53/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 20 Novem­ber 2013 on recrea­tio­nal craft and per­so­nal water­craft and repe­al­ing Direc­ti­ve 94/25/EC (OJ L 354, 28.12.2013, p. 90);

4. Direc­ti­ve 2014/33/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 26 Febru­ary 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to lifts and safe­ty com­pon­ents for lifts (OJ L 96, 29.3.2014, p. 251);

5. Direc­ti­ve 2014/34/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 26 Febru­ary 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to equip­ment and pro­tec­ti­ve systems inten­ded for use in poten­ti­al­ly explo­si­ve atmo­sphe­res (OJ L 96, 29.3.2014, p. 309);

6. Direc­ti­ve 2014/53/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 16 April 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to the making available on the mar­ket of radio equip­ment and repe­al­ing Direc­ti­ve 1999/5/EC (OJ L 153, 22.5.2014, p. 62);

7. Direc­ti­ve 2014/68/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 15 May 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to the making available on the mar­ket of pres­su­re equip­ment (OJ L 189, 27.6.2014, p. 164);

8. Regu­la­ti­on (EU) 2016/424 of the Euro­pean Par­lia­ment and of the Coun­cil of 9 March 2016 on cable­way instal­la­ti­ons and repe­al­ing Direc­ti­ve 2000/9/EC (OJ L 81, 31.3.2016, p. 1);

9. Regu­la­ti­on (EU) 2016/425 of the Euro­pean Par­lia­ment and of the Coun­cil of 9 March 2016 on per­so­nal pro­tec­ti­ve equip­ment and repe­al­ing Coun­cil Direc­ti­ve 89/686/EEC (OJ L 81, 31.3.2016, p. 51);

10. Regu­la­ti­on (EU) 2016/426 of the Euro­pean Par­lia­ment and of the Coun­cil of 9 March 2016 on appli­ances bur­ning gas­eous fuels and repe­al­ing Direc­ti­ve 2009/142/EC (OJ L 81, 31.3.2016, p. 99);

11. Regu­la­ti­on (EU) 2017/745 of the Euro­pean Par­lia­ment and of the Coun­cil of 5 April 2017 on medi­cal devices, amen­ding Direc­ti­ve 2001/83/EC, Regu­la­ti­on (EC) No 178/2002 and Regu­la­ti­on (EC) No 1223/2009 and repe­al­ing Coun­cil Direc­ti­ves 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);

12. Regu­la­ti­on (EU) 2017/746 of the Euro­pean Par­lia­ment and of the Coun­cil of 5 April 2017 on in vitro dia­gno­stic medi­cal devices and repe­al­ing Direc­ti­ve 98/79/EC and Com­mis­si­on Decis­i­on 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

Sec­tion B. List of other Uni­on har­mo­ni­sa­ti­on legislation

13. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72);

14. Regu­la­ti­on (EU) No 168/2013 of the Euro­pean Par­lia­ment and of the Coun­cil of 15 Janu­ary 2013 on the appr­oval and mar­ket sur­veil­lan­ce of two- or three-wheel vehic­les and quad­ri­cy­cles (OJ L 60, 2.3.2013, p. 52);

15. Regu­la­ti­on (EU) No 167/2013 of the Euro­pean Par­lia­ment and of the Coun­cil of 5 Febru­ary 2013 on the appr­oval and mar­ket sur­veil­lan­ce of agri­cul­tu­ral and fore­stry vehic­les (OJ L 60, 2.3.2013, p. 1);

16. Direc­ti­ve 2014/90/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 23 July 2014 on mari­ne equip­ment and repe­al­ing Coun­cil Direc­ti­ve 96/98/EC (OJ L 257, 28.8.2014, p. 146);

17. Direc­ti­ve (EU) 2016/797 of the Euro­pean Par­lia­ment and of the Coun­cil of 11 May 2016 on the inter­ope­ra­bi­li­ty of the rail system within the Euro­pean Uni­on (OJ L 138, 26.5.2016, p. 44);

18. Regu­la­ti­on (EU) 2018/858 of the Euro­pean Par­lia­ment and of the Coun­cil of 30 May 2018 on the appr­oval and mar­ket sur­veil­lan­ce of motor vehic­les and their trai­lers, and of systems, com­pon­ents and sepa­ra­te tech­ni­cal units inten­ded for such vehic­les, amen­ding Regu­la­ti­ons (EC) No 715/2007 and (EC) No 595/2009 and repe­al­ing Direc­ti­ve 2007/46/EC (OJ L 151, 14.6.2018, p. 1);

19. Regu­la­ti­on (EU) 2019/2144 of the Euro­pean Par­lia­ment and of the Coun­cil of 27 Novem­ber 2019 on type-appr­oval requi­re­ments for motor vehic­les and their trai­lers, and systems, com­pon­ents and sepa­ra­te tech­ni­cal units inten­ded for such vehic­les, as regards their gene­ral safe­ty and the pro­tec­tion of vehic­le occu­pants and vul­nerable road users, amen­ding Regu­la­ti­on (EU) 2018/858 of the Euro­pean Par­lia­ment and of the Coun­cil and repe­al­ing Regu­la­ti­ons (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the Euro­pean Par­lia­ment and of the Coun­cil and Com­mis­si­on Regu­la­ti­ons (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);

20. Regu­la­ti­on (EU) 2018/1139 of the Euro­pean Par­lia­ment and of the Coun­cil of 4 July 2018 on com­mon rules in the field of civil avia­ti­on and estab­li­shing a Euro­pean Uni­on Avia­ti­on Safe­ty Agen­cy, and amen­ding Regu­la­ti­ons (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Direc­ti­ves 2014/30/EU and 2014/53/EU of the Euro­pean Par­lia­ment and of the Coun­cil, and repe­al­ing Regu­la­ti­ons (EC) No 552/2004 and (EC) No 216/2008 of the Euro­pean Par­lia­ment and of the Coun­cil and Coun­cil Regu­la­ti­on (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, pro­duc­tion and pla­cing on the mar­ket of air­crafts refer­red to in Artic­le 2(1), points (a) and (b) the­reof, whe­re it con­cerns unman­ned air­craft and their engi­nes, pro­pel­lers, parts and equip­ment to con­trol them remo­te­ly, are concerned.

ANNEX II List of cri­mi­nal offen­ces refer­red to in Artic­le 5(1), first sub­pa­ra­graph, point (h)(iii)

Cri­mi­nal offen­ces refer­red to in Artic­le 5(1), first sub­pa­ra­graph, point (h)(iii):

  • ter­ro­rism,
  • traf­ficking in human beings,
  • sexu­al explo­ita­ti­on of child­ren, and child pornography,
  • illi­cit traf­ficking in nar­co­tic drugs or psy­cho­tro­pic substances,
  • illi­cit traf­ficking in wea­pons, muni­ti­ons or explosives,
  • mur­der, grie­vous bodi­ly injury,
  • illi­cit trade in human organs or tissue,
  • illi­cit traf­ficking in nuclear or radio­ac­ti­ve materials,
  • kid­nap­ping, ille­gal restraint or hostage-taking, 
  • cri­mes within the juris­dic­tion of the Inter­na­tio­nal Cri­mi­nal Court,
  • unlawful sei­zu­re of air­craft or ships,
  • rape,
  • envi­ron­men­tal crime,
  • orga­ni­s­ed or armed robbery,
  • sabo­ta­ge,
  • par­ti­ci­pa­ti­on in a cri­mi­nal orga­ni­sa­ti­on invol­ved in one or more of the offen­ces listed above. 

ANNEX III High-risk AI systems refer­red to in Artic­le 6(2)

High-risk AI systems pur­su­ant to Artic­le 6(2) are the AI systems listed in any of the fol­lo­wing areas:

1. Bio­me­trics, in so far as their use is per­mit­ted under rele­vant Uni­on or natio­nal law:

(a) remo­te bio­me­tric iden­ti­fi­ca­ti­on systems.

This shall not include AI systems inten­ded to be used for bio­me­tric veri­fi­ca­ti­on the sole pur­po­se of which is to con­firm that a spe­ci­fic natu­ral per­son is the per­son he or she claims to be;

(54) As bio­me­tric data con­sti­tu­tes a spe­cial cate­go­ry of per­so­nal data, it is appro­pria­te to clas­si­fy as high-risk seve­ral cri­ti­cal-use cases of bio­me­tric systems, inso­far as their use is per­mit­ted under rele­vant Uni­on and natio­nal law. Tech­ni­cal inac­cu­ra­ci­es of AI systems inten­ded for the remo­te bio­me­tric iden­ti­fi­ca­ti­on of natu­ral per­sons can lead to bia­sed results and ent­ail dis­cri­mi­na­to­ry effects. The risk of such bia­sed results and dis­cri­mi­na­to­ry effects is par­ti­cu­lar­ly rele­vant with regard to age, eth­ni­ci­ty, race, sex or disa­bi­li­ties. Remo­te bio­me­tric iden­ti­fi­ca­ti­on systems should the­r­e­fo­re be clas­si­fi­ed as high-risk in view of the risks that they pose. Such a clas­si­fi­ca­ti­on exclu­des AI systems inten­ded to be used for bio­me­tric veri­fi­ca­ti­on, inclu­ding authen­ti­ca­ti­on, the sole pur­po­se of which is to con­firm that a spe­ci­fic natu­ral per­son is who that per­son claims to be and to con­firm the iden­ti­ty of a natu­ral per­son for the sole pur­po­se of having access to a ser­vice, unlocking a device or having secu­re access to pre­mi­ses. In addi­ti­on, AI systems inten­ded to be used for bio­me­tric cate­go­ri­sa­ti­on accor­ding to sen­si­ti­ve attri­bu­tes or cha­rac­te­ri­stics pro­tec­ted under Artic­le 9(1) of Regu­la­ti­on (EU) 2016/679 on the basis of bio­me­tric data, in so far as the­se are not pro­hi­bi­ted under this Regu­la­ti­on, and emo­ti­on reco­gni­ti­on systems that are not pro­hi­bi­ted under this Regu­la­ti­on, should be clas­si­fi­ed as high-risk. Bio­me­tric systems which are inten­ded to be used sole­ly for the pur­po­se of enab­ling cyber­se­cu­ri­ty and per­so­nal data pro­tec­tion mea­su­res should not be con­side­red to be high-risk AI systems.

(b) AI systems inten­ded to be used for bio­me­tric cate­go­ri­sa­ti­on, accor­ding to sen­si­ti­ve or pro­tec­ted attri­bu­tes or cha­rac­te­ri­stics based on the infe­rence of tho­se attri­bu­tes or characteristics;

(c) AI systems inten­ded to be used for emo­ti­on recognition. 

2. Cri­ti­cal infras­truc­tu­re: AI systems inten­ded to be used as safe­ty com­pon­ents in the manage­ment and ope­ra­ti­on of cri­ti­cal digi­tal infras­truc­tu­re, road traf­fic, or in the sup­p­ly of water, gas, hea­ting or electricity.

(55) As regards the manage­ment and ope­ra­ti­on of cri­ti­cal infras­truc­tu­re, it is appro­pria­te to clas­si­fy as high-risk the AI systems inten­ded to be used as safe­ty com­pon­ents in the manage­ment and ope­ra­ti­on of cri­ti­cal digi­tal infras­truc­tu­re as listed in point (8) of the Annex to Direc­ti­ve (EU) 2022/2557, road traf­fic and the sup­p­ly of water, gas, hea­ting and elec­tri­ci­ty, sin­ce their fail­ure or mal­func­tio­ning may put at risk the life and health of per­sons at lar­ge sca­le and lead to app­re­cia­ble dis­rup­ti­ons in the ordi­na­ry con­duct of social and eco­no­mic acti­vi­ties. Safe­ty com­pon­ents of cri­ti­cal infras­truc­tu­re, inclu­ding cri­ti­cal digi­tal infras­truc­tu­re, are systems used to direct­ly pro­tect the phy­si­cal inte­gri­ty of cri­ti­cal infras­truc­tu­re or the health and safe­ty of per­sons and pro­per­ty but which are not neces­sa­ry in order for the system to func­tion. The fail­ure or mal­func­tio­ning of such com­pon­ents might direct­ly lead to risks to the phy­si­cal inte­gri­ty of cri­ti­cal infras­truc­tu­re and thus to risks to health and safe­ty of per­sons and pro­per­ty. Com­pon­ents inten­ded to be used sole­ly for cyber­se­cu­ri­ty pur­po­ses should not qua­li­fy as safe­ty com­pon­ents. Examp­les of safe­ty com­pon­ents of such cri­ti­cal infras­truc­tu­re may include systems for moni­to­ring water pres­su­re or fire alarm con­trol­ling systems in cloud com­pu­ting centres. 

3. Edu­ca­ti­on and voca­tio­nal training:

(a) AI systems inten­ded to be used to deter­mi­ne access or admis­si­on or to assign natu­ral per­sons to edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons at all levels;

(b) AI systems inten­ded to be used to eva­lua­te lear­ning out­co­mes, inclu­ding when tho­se out­co­mes are used to steer the lear­ning pro­cess of natu­ral per­sons in edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons at all levels;

(c) AI systems inten­ded to be used for the pur­po­se of asses­sing the appro­pria­te level of edu­ca­ti­on that an indi­vi­du­al will recei­ve or will be able to access, in the con­text of or within edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons at all levels;

(d) AI systems inten­ded to be used for moni­to­ring and detec­ting pro­hi­bi­ted beha­viour of stu­dents during tests in the con­text of or within edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons at all levels.

(56) The deployment of AI systems in edu­ca­ti­on is important to pro­mo­te high-qua­li­ty digi­tal edu­ca­ti­on and trai­ning and to allow all lear­ners and tea­chers to acqui­re and share the neces­sa­ry digi­tal skills and com­pe­ten­ces, inclu­ding media liter­a­cy, and cri­ti­cal thin­king, to take an acti­ve part in the eco­no­my, socie­ty, and in demo­cra­tic pro­ce­s­ses. Howe­ver, AI systems used in edu­ca­ti­on or voca­tio­nal trai­ning, in par­ti­cu­lar for deter­mi­ning access or admis­si­on, for assig­ning per­sons to edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons or pro­gram­mes at all levels, for eva­lua­ting lear­ning out­co­mes of per­sons, for asses­sing the appro­pria­te level of edu­ca­ti­on for an indi­vi­du­al and mate­ri­al­ly influen­cing the level of edu­ca­ti­on and trai­ning that indi­vi­du­als will recei­ve or will be able to access or for moni­to­ring and detec­ting pro­hi­bi­ted beha­viour of stu­dents during tests should be clas­si­fi­ed as high-risk AI systems, sin­ce they may deter­mi­ne the edu­ca­tio­nal and pro­fes­sio­nal cour­se of a person’s life and the­r­e­fo­re may affect that person’s abili­ty to secu­re a liveli­hood. When impro­per­ly desi­gned and used, such systems may be par­ti­cu­lar­ly intru­si­ve and may vio­la­te the right to edu­ca­ti­on and trai­ning as well as the right not to be dis­cri­mi­na­ted against and per­pe­tua­te histo­ri­cal pat­terns of dis­cri­mi­na­ti­on, for exam­p­le against women, cer­tain age groups, per­sons with disa­bi­li­ties, or per­sons of cer­tain racial or eth­nic ori­g­ins or sexu­al orientation.

4. Employment, workers manage­ment and access to self-employment:

(a) AI systems inten­ded to be used for the recruit­ment or sel­ec­tion of natu­ral per­sons, in par­ti­cu­lar to place tar­ge­ted job adver­ti­se­ments, to ana­ly­se and fil­ter job appli­ca­ti­ons, and to eva­lua­te candidates;

(b) AI systems inten­ded to be used to make decis­i­ons affec­ting terms of work-rela­ted rela­ti­on­ships, the pro­mo­ti­on or ter­mi­na­ti­on of work-rela­ted con­trac­tu­al rela­ti­on­ships, to allo­ca­te tasks based on indi­vi­du­al beha­viour or per­so­nal traits or cha­rac­te­ri­stics or to moni­tor and eva­lua­te the per­for­mance and beha­viour of per­sons in such relationships.

(57) AI systems used in employment, workers manage­ment and access to self-employment, in par­ti­cu­lar for the recruit­ment and sel­ec­tion of per­sons, for making decis­i­ons affec­ting terms of the work-rela­ted rela­ti­on­ship, pro­mo­ti­on and ter­mi­na­ti­on of work-rela­ted con­trac­tu­al rela­ti­on­ships, for allo­ca­ting tasks on the basis of indi­vi­du­al beha­viour, per­so­nal traits or cha­rac­te­ri­stics and for moni­to­ring or eva­lua­ti­on of per­sons in work- rela­ted con­trac­tu­al rela­ti­on­ships, should also be clas­si­fi­ed as high-risk, sin­ce tho­se systems may have an app­re­cia­ble impact on future care­er pro­s­pects, liveli­hoods of tho­se per­sons and workers’ rights. Rele­vant work-rela­ted con­trac­tu­al rela­ti­on­ships should, in a meaningful man­ner, invol­ve employees and per­sons pro­vi­ding ser­vices through plat­forms as refer­red to in the Com­mis­si­on Work Pro­gram­me 2021. Throug­hout the recruit­ment pro­cess and in the eva­lua­ti­on, pro­mo­ti­on, or reten­ti­on of per­sons in work-rela­ted con­trac­tu­al rela­ti­on­ships, such systems may per­pe­tua­te histo­ri­cal pat­terns of dis­cri­mi­na­ti­on, for exam­p­le against women, cer­tain age groups, per­sons with disa­bi­li­ties, or per­sons of cer­tain racial or eth­nic ori­g­ins or sexu­al ori­en­ta­ti­on. AI systems used to moni­tor the per­for­mance and beha­viour of such per­sons may also under­mi­ne their fun­da­men­tal rights to data pro­tec­tion and privacy.

5. Access to and enjoy­ment of essen­ti­al pri­va­te ser­vices and essen­ti­al public ser­vices and benefits:

(a) AI systems inten­ded to be used by public aut­ho­ri­ties or on behalf of public aut­ho­ri­ties to eva­lua­te the eli­gi­bi­li­ty of natu­ral per­sons for essen­ti­al public assi­stance bene­fits and ser­vices, inclu­ding heal­th­ca­re ser­vices, as well as to grant, redu­ce, revo­ke, or recla­im such bene­fits and services;

(b) AI systems inten­ded to be used to eva­lua­te the cre­dit­wort­hi­ness of natu­ral per­sons or estab­lish their cre­dit score, with the excep­ti­on of AI systems used for the pur­po­se of detec­ting finan­cial fraud;

(c) AI systems inten­ded to be used for risk assess­ment and pri­cing in rela­ti­on to natu­ral per­sons in the case of life and health insurance;

(d) AI systems inten­ded to eva­lua­te and clas­si­fy emer­gen­cy calls by natu­ral per­sons or to be used to dis­patch, or to estab­lish prio­ri­ty in the dis­patching of, emer­gen­cy first respon­se ser­vices, inclu­ding by poli­ce, fire­figh­ters and medi­cal aid, as well as of emer­gen­cy heal­th­ca­re pati­ent tria­ge systems.

(58) Ano­ther area in which the use of AI systems deser­ves spe­cial con­side­ra­ti­on is the access to and enjoy­ment of cer­tain essen­ti­al pri­va­te and public ser­vices and bene­fits neces­sa­ry for peo­p­le to ful­ly par­ti­ci­pa­te in socie­ty or to impro­ve one’s stan­dard of living. In par­ti­cu­lar, natu­ral per­sons app­ly­ing for or recei­ving essen­ti­al public assi­stance bene­fits and ser­vices from public aut­ho­ri­ties name­ly heal­th­ca­re ser­vices, social secu­ri­ty bene­fits, social ser­vices pro­vi­ding pro­tec­tion in cases such as mater­ni­ty, ill­ness, indu­stri­al acci­dents, depen­den­cy or old age and loss of employment and social and housing assi­stance, are typi­cal­ly depen­dent on tho­se bene­fits and ser­vices and in a vul­nerable posi­ti­on in rela­ti­on to the respon­si­ble aut­ho­ri­ties. If AI systems are used for deter­mi­ning whe­ther such bene­fits and ser­vices should be gran­ted, denied, redu­ced, revo­ked or reclai­med by aut­ho­ri­ties, inclu­ding whe­ther bene­fi­ci­a­ries are legi­ti­m­ate­ly entit­led to such bene­fits or ser­vices, tho­se systems may have a signi­fi­cant impact on per­sons’ liveli­hood and may inf­rin­ge their fun­da­men­tal rights, such as the right to social pro­tec­tion, non-dis­cri­mi­na­ti­on, human dignity or an effec­ti­ve reme­dy and should the­r­e­fo­re be clas­si­fi­ed as high-risk. None­thel­ess, this Regu­la­ti­on should not ham­per the deve­lo­p­ment and use of inno­va­ti­ve approa­ches in the public admi­ni­stra­ti­on, which would stand to bene­fit from a wider use of com­pli­ant and safe AI systems, pro­vi­ded that tho­se systems do not ent­ail a high risk to legal and natu­ral persons.

In addi­ti­on, AI systems used to eva­lua­te the cre­dit score or cre­dit­wort­hi­ness of natu­ral per­sons should be clas­si­fi­ed as high-risk AI systems, sin­ce they deter­mi­ne tho­se per­sons’ access to finan­cial resour­ces or essen­ti­al ser­vices such as housing, elec­tri­ci­ty, and tele­com­mu­ni­ca­ti­on ser­vices. AI systems used for tho­se pur­po­ses may lead to dis­cri­mi­na­ti­on bet­ween per­sons or groups and may per­pe­tua­te histo­ri­cal pat­terns of dis­cri­mi­na­ti­on, such as that based on racial or eth­nic ori­g­ins, gen­der, disa­bi­li­ties, age or sexu­al ori­en­ta­ti­on, or may crea­te new forms of dis­cri­mi­na­to­ry impacts. Howe­ver, AI systems pro­vi­ded for by Uni­on law for the pur­po­se of detec­ting fraud in the offe­ring of finan­cial ser­vices and for pru­den­ti­al pur­po­ses to cal­cu­la­te cre­dit insti­tu­ti­ons’ and insu­rance under­ta­kings’ capi­tal requi­re­ments should not be con­side­red to be high-risk under this Regu­la­ti­on. Moreo­ver, AI systems inten­ded to be used for risk assess­ment and pri­cing in rela­ti­on to natu­ral per­sons for health and life insu­rance can also have a signi­fi­cant impact on per­sons’ liveli­hood and if not duly desi­gned, deve­lo­ped and used, can inf­rin­ge their fun­da­men­tal rights and can lead to serious con­se­quen­ces for people’s life and health, inclu­ding finan­cial exclu­si­on and dis­cri­mi­na­ti­on. Final­ly, AI systems used to eva­lua­te and clas­si­fy emer­gen­cy calls by natu­ral per­sons or to dis­patch or estab­lish prio­ri­ty in the dis­patching of emer­gen­cy first respon­se ser­vices, inclu­ding by poli­ce, fire­figh­ters and medi­cal aid, as well as of emer­gen­cy heal­th­ca­re pati­ent tria­ge systems, should also be clas­si­fi­ed as high-risk sin­ce they make decis­i­ons in very cri­ti­cal situa­tions for the life and health of per­sons and their property.

6. Law enforce­ment, in so far as their use is per­mit­ted under rele­vant Uni­on or natio­nal law:

(a) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties, or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es in sup­port of law enforce­ment aut­ho­ri­ties or on their behalf to assess the risk of a natu­ral per­son beco­ming the vic­tim of cri­mi­nal offences;

(b) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es in sup­port of law enforce­ment aut­ho­ri­ties as poly­graphs or simi­lar tools;

(c) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties, or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es, in sup­port of law enforce­ment aut­ho­ri­ties to eva­lua­te the relia­bi­li­ty of evi­dence in the cour­se of the inve­sti­ga­ti­on or pro­se­cu­ti­on of cri­mi­nal offences;

(d) AI systems inten­ded to be used by law enforce­ment aut­ho­ri­ties or on their behalf or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es in sup­port of law enforce­ment aut­ho­ri­ties for asses­sing the risk of a natu­ral per­son offen­ding or re-offen­ding not sole­ly on the basis of the pro­fil­ing of natu­ral per­sons as refer­red to in Artic­le 3(4) of Direc­ti­ve (EU) 2016/680, or to assess per­so­na­li­ty traits and cha­rac­te­ri­stics or past cri­mi­nal beha­viour of natu­ral per­sons or groups;

(e) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es in sup­port of law enforce­ment aut­ho­ri­ties for the pro­fil­ing of natu­ral per­sons as refer­red to in Artic­le 3(4) of Direc­ti­ve (EU) 2016/680 in the cour­se of the detec­tion, inve­sti­ga­ti­on or pro­se­cu­ti­on of cri­mi­nal offences.

(59) Given their role and respon­si­bi­li­ty, actions by law enforce­ment aut­ho­ri­ties invol­ving cer­tain uses of AI systems are cha­rac­te­ri­sed by a signi­fi­cant degree of power imba­lan­ce and may lead to sur­veil­lan­ce, arrest or depri­va­ti­on of a natu­ral person’s liber­ty as well as other adver­se impacts on fun­da­men­tal rights gua­ran­teed in the Char­ter. In par­ti­cu­lar, if the AI system is not trai­ned with high-qua­li­ty data, does not meet ade­qua­te requi­re­ments in terms of its per­for­mance, its accu­ra­cy or robust­ness, or is not pro­per­ly desi­gned and tested befo­re being put on the mar­ket or other­wi­se put into ser­vice, it may sin­gle out peo­p­le in a dis­cri­mi­na­to­ry or other­wi­se incor­rect or unjust man­ner. Fur­ther­mo­re, the exer­cise of important pro­ce­du­ral fun­da­men­tal rights, such as the right to an effec­ti­ve reme­dy and to a fair tri­al as well as the right of defence and the pre­sump­ti­on of inno­cence, could be ham­pe­red, in par­ti­cu­lar, whe­re such AI systems are not suf­fi­ci­ent­ly trans­pa­rent, explainable and docu­men­ted. It is the­r­e­fo­re appro­pria­te to clas­si­fy as high-risk, inso­far as their use is per­mit­ted under rele­vant Uni­on and natio­nal law, a num­ber of AI systems inten­ded to be used in the law enforce­ment con­text whe­re accu­ra­cy, relia­bi­li­ty and trans­pa­ren­cy is par­ti­cu­lar­ly important to avo­id adver­se impacts, retain public trust and ensu­re accoun­ta­bi­li­ty and effec­ti­ve redress.

In view of the nature of the activities and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices, or agencies in support of law enforcement authorities for assessing the risk of a natural person becoming a victim of criminal offences, as polygraphs and similar tools, for the evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences, and, insofar as not prohibited under this Regulation, for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons or the assessment of personality traits and characteristics or the past criminal behaviour of natural persons or groups, and for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other relevant authorities should not become a factor of inequality or exclusion.
The impact of the use of AI tools on the defence rights of suspects should not be igno­red, in par­ti­cu­lar the dif­fi­cul­ty in obtai­ning meaningful infor­ma­ti­on on the func­tio­ning of tho­se systems and the resul­ting dif­fi­cul­ty in chal­len­ging their results in court, in par­ti­cu­lar by natu­ral per­sons under investigation.

7. Migra­ti­on, asyl­um and bor­der con­trol manage­ment, in so far as their use is per­mit­ted under rele­vant Uni­on or natio­nal law:

(a) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es as poly­graphs or simi­lar tools;

(b) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es to assess a risk, inclu­ding a secu­ri­ty risk, a risk of irre­gu­lar migra­ti­on, or a health risk, posed by a natu­ral per­son who intends to enter or who has ente­red into the ter­ri­to­ry of a Mem­ber State;

(c) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es to assist com­pe­tent public aut­ho­ri­ties for the exami­na­ti­on of appli­ca­ti­ons for asyl­um, visa or resi­dence per­mits and for asso­cia­ted com­plaints with regard to the eli­gi­bi­li­ty of the natu­ral per­sons app­ly­ing for a sta­tus, inclu­ding rela­ted assess­ments of the relia­bi­li­ty of evidence;

(d) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties, or by Uni­on insti­tu­ti­ons, bodies, offices or agen­ci­es, in the con­text of migra­ti­on, asyl­um or bor­der con­trol manage­ment, for the pur­po­se of detec­ting, reco­g­nis­ing or iden­ti­fy­ing natu­ral per­sons, with the excep­ti­on of the veri­fi­ca­ti­on of tra­vel documents.

(60) AI systems used in migration, asylum and border control management affect persons who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee respect for the fundamental rights of the affected persons, in particular their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum, for assisting competent public authorities in the examination, including the related assessment of the reliability of evidence, of applications for asylum, visa and residence permits and associated complaints, with regard to the objective of establishing the eligibility of the natural persons applying for a status, and for the purpose of detecting, recognising or identifying natural persons in the context of migration, asylum and border control management, with the exception of verification of travel documents.

AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by Regulation (EC) No 810/2009 of the European Parliament and of the Council, Directive 2013/32/EU of the European Parliament and of the Council, and other relevant Union law. AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, bodies, offices or agencies as a means to circumvent their international obligations under the UN Convention relating to the Status of Refugees, done at Geneva on 28 July 1951, as amended by the Protocol of 31 January 1967. Nor should they be used in any way to infringe the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.

8. Admi­ni­stra­ti­on of justi­ce and demo­cra­tic processes:

(a) AI systems inten­ded to be used by a judi­cial aut­ho­ri­ty or on their behalf to assist a judi­cial aut­ho­ri­ty in rese­ar­ching and inter­pre­ting facts and the law and in app­ly­ing the law to a con­cre­te set of facts, or to be used in a simi­lar way in alter­na­ti­ve dis­pu­te resolution;

(b) AI systems inten­ded to be used for influen­cing the out­co­me of an elec­tion or refe­ren­dum or the voting beha­viour of natu­ral per­sons in the exer­cise of their vote in elec­tions or refe­ren­da. This does not include AI systems to the out­put of which natu­ral per­sons are not direct­ly expo­sed, such as tools used to orga­ni­se, opti­mi­se or struc­tu­re poli­ti­cal cam­paigns from an admi­ni­stra­ti­ve or logi­sti­cal point of view.

(61) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties. The use of AI tools can support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity. The classification of AI systems as high-risk should not, however, extend to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as the anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, or administrative tasks.

(62) Without prejudice to the rules provided for in Regulation (EU) 2024/… of the European Parliament and of the Council, and in order to address the risks of undue external interference with the right to vote enshrined in Article 39 of the Charter, and of adverse effects on democracy and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems, with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view.

ANNEX IV Tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11(1)

The tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11(1) shall con­tain at least the fol­lo­wing infor­ma­ti­on, as appli­ca­ble to the rele­vant AI system:

1. A gene­ral descrip­ti­on of the AI system including:

(a) its inten­ded pur­po­se, the name of the pro­vi­der and the ver­si­on of the system reflec­ting its rela­ti­on to pre­vious versions;

(b) how the AI system inter­acts with, or can be used to inter­act with, hard­ware or soft­ware, inclu­ding with other AI systems, that are not part of the AI system its­elf, whe­re applicable;

(c) the ver­si­ons of rele­vant soft­ware or firm­ware, and any requi­re­ments rela­ted to ver­si­on updates;

(d) the descrip­ti­on of all the forms in which the AI system is pla­ced on the mar­ket or put into ser­vice, such as soft­ware packa­ges embedded into hard­ware, down­loads, or APIs;

(e) the descrip­ti­on of the hard­ware on which the AI system is inten­ded to run;

(f) whe­re the AI system is a com­po­nent of pro­ducts, pho­to­graphs or illu­stra­ti­ons show­ing exter­nal fea­tures, the mar­king and inter­nal lay­out of tho­se products;

(g) a basic descrip­ti­on of the user-inter­face pro­vi­ded to the deployer;

(h) instructions for use for the deployer, and a basic description of the user-interface provided to the deployer, where applicable;

2. A detail­ed descrip­ti­on of the ele­ments of the AI system and of the pro­cess for its deve­lo­p­ment, including:

(a) the methods and steps per­for­med for the deve­lo­p­ment of the AI system, inclu­ding, whe­re rele­vant, recour­se to pre-trai­ned systems or tools pro­vi­ded by third par­ties and how tho­se were used, inte­gra­ted or modi­fi­ed by the provider;

(b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, including with regard to persons or groups of persons in respect of whom the system is intended to be used; the main classification choices; what the system is designed to optimise for, and the relevance of the different parameters; the description of the expected output and output quality of the system; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Chapter III, Section 2;

(c) the descrip­ti­on of the system archi­tec­tu­re explai­ning how soft­ware com­pon­ents build on or feed into each other and inte­gra­te into the over­all pro­ce­s­sing; the com­pu­ta­tio­nal resour­ces used to deve­lop, train, test and vali­da­te the AI system;

(d) whe­re rele­vant, the data requi­re­ments in terms of datas­heets describ­ing the trai­ning metho­do­lo­gies and tech­ni­ques and the trai­ning data sets used, inclu­ding a gene­ral descrip­ti­on of the­se data sets, infor­ma­ti­on about their pro­ven­an­ce, scope and main cha­rac­te­ri­stics; how the data was obtai­ned and sel­ec­ted; label­ling pro­ce­du­res (e.g. for super­vi­sed lear­ning), data clea­ning metho­do­lo­gies (e.g. out­liers detection);

(e) assess­ment of the human over­sight mea­su­res nee­ded in accordance with Artic­le 14, inclu­ding an assess­ment of the tech­ni­cal mea­su­res nee­ded to faci­li­ta­te the inter­pre­ta­ti­on of the out­puts of AI systems by the deployers, in accordance with Artic­le 13(3), point (d);

(f) whe­re appli­ca­ble, a detail­ed descrip­ti­on of pre-deter­mi­ned chan­ges to the AI system and its per­for­mance, tog­e­ther with all the rele­vant infor­ma­ti­on rela­ted to the tech­ni­cal solu­ti­ons adopted to ensu­re con­ti­nuous com­pli­ance of the AI system with the rele­vant requi­re­ments set out in Chap­ter III, Sec­tion 2;

(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant requirements set out in Chapter III, Section 2, as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f);

(h) cyber­se­cu­ri­ty mea­su­res put in place;

3. Detail­ed infor­ma­ti­on about the moni­to­ring, func­tio­ning and con­trol of the AI system, in par­ti­cu­lar with regard to: its capa­bi­li­ties and limi­ta­ti­ons in per­for­mance, inclu­ding the degrees of accu­ra­cy for spe­ci­fic per­sons or groups of per­sons on which the system is inten­ded to be used and the over­all expec­ted level of accu­ra­cy in rela­ti­on to its inten­ded pur­po­se; the fore­seeable unin­ten­ded out­co­mes and sources of risks to health and safe­ty, fun­da­men­tal rights and dis­cri­mi­na­ti­on in view of the inten­ded pur­po­se of the AI system; the human over­sight mea­su­res nee­ded in accordance with Artic­le 14, inclu­ding the tech­ni­cal mea­su­res put in place to faci­li­ta­te the inter­pre­ta­ti­on of the out­puts of AI systems by the deployers; spe­ci­fi­ca­ti­ons on input data, as appropriate;

4. A descrip­ti­on of the appro­pria­ten­ess of the per­for­mance metrics for the spe­ci­fic AI system;

5. A detail­ed descrip­ti­on of the risk manage­ment system in accordance with Artic­le 9;

6. A descrip­ti­on of rele­vant chan­ges made by the pro­vi­der to the system through its lifecycle;

7. A list of the har­mo­ni­s­ed stan­dards applied in full or in part the refe­ren­ces of which have been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on; whe­re no such har­mo­ni­s­ed stan­dards have been applied, a detail­ed descrip­ti­on of the solu­ti­ons adopted to meet the requi­re­ments set out in Chap­ter III, Sec­tion 2, inclu­ding a list of other rele­vant stan­dards and tech­ni­cal spe­ci­fi­ca­ti­ons applied;

8. A copy of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47;

9. A detail­ed descrip­ti­on of the system in place to eva­lua­te the AI system per­for­mance in the post-mar­ket pha­se in accordance with Artic­le 72, inclu­ding the post-mar­ket moni­to­ring plan refer­red to in Artic­le 72(3).
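As a reading aid only, and not part of the Regulation itself, the nine top-level items of Annex IV can be kept as an internal completeness checklist when assembling a technical documentation dossier. The following minimal Python sketch assumes its own field names; none of the identifiers below are terms defined by the AI Act.

```python
# Reading aid, not part of the Regulation: the nine top-level items of
# Annex IV as a checklist. All identifiers are this sketch's own
# invention, not legal terminology.
ANNEX_IV_ITEMS = {
    "general_description": "1. General description of the AI system",
    "elements_and_development": "2. Elements of the system and development process",
    "monitoring_functioning_control": "3. Monitoring, functioning and control",
    "performance_metrics": "4. Appropriateness of the performance metrics",
    "risk_management_system": "5. Risk management system (Article 9)",
    "lifecycle_changes": "6. Relevant changes through the lifecycle",
    "harmonised_standards": "7. Harmonised standards or alternative solutions",
    "eu_declaration_of_conformity": "8. Copy of the EU declaration of conformity (Article 47)",
    "post_market_evaluation": "9. Post-market evaluation system (Article 72)",
}

def missing_items(documentation: dict) -> list[str]:
    """Return the Annex IV points not yet covered by a draft dossier."""
    return [label for key, label in ANNEX_IV_ITEMS.items()
            if not documentation.get(key)]

draft = {"general_description": "…", "risk_management_system": "…"}
print(len(missing_items(draft)))  # 7 points still open
```

Such a checklist captures only the top-level structure; each point in turn fans out into the sub-items listed above (for example, points 1 and 2 each carry their own lettered sub-requirements).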

ANNEX V EU decla­ra­ti­on of conformity

The EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47, shall con­tain all of the fol­lo­wing information:

1. AI system name and type and any addi­tio­nal unam­bi­guous refe­rence allo­wing the iden­ti­fi­ca­ti­on and tracea­bi­li­ty of the AI system;

2. The name and address of the pro­vi­der or, whe­re appli­ca­ble, of their aut­ho­ri­sed representative;

3. A state­ment that the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47 is issued under the sole respon­si­bi­li­ty of the provider;

4. A state­ment that the AI system is in con­for­mi­ty with this Regu­la­ti­on and, if appli­ca­ble, with any other rele­vant Uni­on law that pro­vi­des for the issuing of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47;

5. Whe­re an AI system invol­ves the pro­ce­s­sing of per­so­nal data, a state­ment that that AI system com­plies with Regu­la­ti­ons (EU) 2016/679 and (EU) 2018/1725 and Direc­ti­ve (EU) 2016/680;

6. Refe­ren­ces to any rele­vant har­mo­ni­s­ed stan­dards used or any other com­mon spe­ci­fi­ca­ti­on in rela­ti­on to which con­for­mi­ty is declared;

7. Whe­re appli­ca­ble, the name and iden­ti­fi­ca­ti­on num­ber of the noti­fi­ed body, a descrip­ti­on of the con­for­mi­ty assess­ment pro­ce­du­re per­for­med, and iden­ti­fi­ca­ti­on of the cer­ti­fi­ca­te issued;

8. The place and date of issue of the decla­ra­ti­on, the name and func­tion of the per­son who signed it, as well as an indi­ca­ti­on for, or on behalf of whom, that per­son signed, a signature.
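Again purely as a reading aid, the eight information items of Annex V can be represented as a simple record with a completeness check that honours the two conditional items (point 5 applies only where personal data are processed, point 7 only where a notified body was involved). The field names in this Python sketch are illustrative assumptions, not legal terminology.

```python
# Reading aid, not part of the Regulation: the Annex V declaration of
# conformity as a record with a completeness check. Field names are
# illustrative only.
REQUIRED_FIELDS = [
    "system_name_and_type",           # point 1
    "provider_name_and_address",      # point 2
    "sole_responsibility_statement",  # point 3
    "conformity_statement",           # point 4
    "data_protection_statement",      # point 5 (only if personal data are processed)
    "harmonised_standards_refs",      # point 6
    "notified_body_details",          # point 7 (where applicable)
    "place_date_signature",           # point 8
]

def incomplete(declaration: dict,
               processes_personal_data: bool = True,
               notified_body_involved: bool = True) -> list[str]:
    """List Annex V points still missing, skipping the conditional items
    (points 5 and 7) when they do not apply."""
    optional = set()
    if not processes_personal_data:
        optional.add("data_protection_statement")
    if not notified_body_involved:
        optional.add("notified_body_details")
    return [f for f in REQUIRED_FIELDS
            if f not in optional and not declaration.get(f)]
```

For instance, a provider of a system that processes no personal data and underwent internal control only would need to supply the remaining six items.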

ANNEX VI Con­for­mi­ty assess­ment pro­ce­du­re based on inter­nal control

1. The con­for­mi­ty assess­ment pro­ce­du­re based on inter­nal con­trol is the con­for­mi­ty assess­ment pro­ce­du­re based on points 2, 3 and 4.

2. The pro­vi­der veri­fi­es that the estab­lished qua­li­ty manage­ment system is in com­pli­ance with the requi­re­ments of Artic­le 17.

3. The pro­vi­der exami­nes the infor­ma­ti­on con­tai­ned in the tech­ni­cal docu­men­ta­ti­on in order to assess the com­pli­ance of the AI system with the rele­vant essen­ti­al requi­re­ments set out in Chap­ter III, Sec­tion 2.

4. The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 72 are consistent with the technical documentation.
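As a reading aid, the internal-control procedure of Annex VI reduces to three provider-side verifications (points 2 to 4) with no notified-body involvement. The sketch below assumes the three checks as boolean placeholders; it is a structural illustration, not an implementation of any real assessment.

```python
# Reading aid, not part of the Regulation: Annex VI internal control as
# three provider-side verification steps. The inputs are placeholder
# booleans assumed by this sketch, not real APIs.
def internal_control(qms_ok: bool, tech_doc_ok: bool, process_ok: bool) -> bool:
    """All three verifications must succeed; unlike Annex VII, no
    notified body takes part in this procedure."""
    steps = [
        ("QMS complies with Article 17", qms_ok),                     # point 2
        ("technical documentation shows compliance", tech_doc_ok),    # point 3
        ("development and post-market monitoring match the docs", process_ok),  # point 4
    ]
    return all(ok for _, ok in steps)
```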

ANNEX VII Con­for­mi­ty based on an assess­ment of the qua­li­ty manage­ment system and an assess­ment of the tech­ni­cal documentation

1. Introduction

Con­for­mi­ty based on an assess­ment of the qua­li­ty manage­ment system and an assess­ment of the tech­ni­cal docu­men­ta­ti­on is the con­for­mi­ty assess­ment pro­ce­du­re based on points 2 to 5.

2. Overview

The appro­ved qua­li­ty manage­ment system for the design, deve­lo­p­ment and test­ing of AI systems pur­su­ant to Artic­le 17 shall be exami­ned in accordance with point 3 and shall be sub­ject to sur­veil­lan­ce as spe­ci­fi­ed in point 5. The tech­ni­cal docu­men­ta­ti­on of the AI system shall be exami­ned in accordance with point 4.

3. Qua­li­ty manage­ment system

3.1. The appli­ca­ti­on of the pro­vi­der shall include:

(a) the name and address of the pro­vi­der and, if the appli­ca­ti­on is lodged by an aut­ho­ri­sed repre­sen­ta­ti­ve, also their name and address;

(b) the list of AI systems cover­ed under the same qua­li­ty manage­ment system;

(c) the tech­ni­cal docu­men­ta­ti­on for each AI system cover­ed under the same qua­li­ty manage­ment system;

(d) the docu­men­ta­ti­on con­cer­ning the qua­li­ty manage­ment system which shall cover all the aspects listed under Artic­le 17;

(e) a descrip­ti­on of the pro­ce­du­res in place to ensu­re that the qua­li­ty manage­ment system remains ade­qua­te and effective;

(f) a writ­ten decla­ra­ti­on that the same appli­ca­ti­on has not been lodged with any other noti­fi­ed body.

3.2. The qua­li­ty manage­ment system shall be asses­sed by the noti­fi­ed body, which shall deter­mi­ne whe­ther it satis­fies the requi­re­ments refer­red to in Artic­le 17.

The decis­i­on shall be noti­fi­ed to the pro­vi­der or its aut­ho­ri­sed representative.

The noti­fi­ca­ti­on shall con­tain the con­clu­si­ons of the assess­ment of the qua­li­ty manage­ment system and the rea­so­ned assess­ment decision.

3.3. The qua­li­ty manage­ment system as appro­ved shall con­ti­n­ue to be imple­men­ted and main­tai­ned by the pro­vi­der so that it remains ade­qua­te and efficient.

3.4. Any inten­ded chan­ge to the appro­ved qua­li­ty manage­ment system or the list of AI systems cover­ed by the lat­ter shall be brought to the atten­ti­on of the noti­fi­ed body by the provider.

The pro­po­sed chan­ges shall be exami­ned by the noti­fi­ed body, which shall deci­de whe­ther the modi­fi­ed qua­li­ty manage­ment system con­ti­nues to satis­fy the requi­re­ments refer­red to in point 3.2 or whe­ther a reas­sess­ment is necessary.

The noti­fi­ed body shall noti­fy the pro­vi­der of its decis­i­on. The noti­fi­ca­ti­on shall con­tain the con­clu­si­ons of the exami­na­ti­on of the chan­ges and the rea­so­ned assess­ment decision.

4. Con­trol of the tech­ni­cal documentation.

4.1. In addi­ti­on to the appli­ca­ti­on refer­red to in point 3, an appli­ca­ti­on with a noti­fi­ed body of their choice shall be lodged by the pro­vi­der for the assess­ment of the tech­ni­cal docu­men­ta­ti­on rela­ting to the AI system which the pro­vi­der intends to place on the mar­ket or put into ser­vice and which is cover­ed by the qua­li­ty manage­ment system refer­red to under point 3.

4.2. The appli­ca­ti­on shall include:

(a) the name and address of the provider;

(b) a writ­ten decla­ra­ti­on that the same appli­ca­ti­on has not been lodged with any other noti­fi­ed body;

(c) the tech­ni­cal docu­men­ta­ti­on refer­red to in Annex IV.

4.3. The tech­ni­cal docu­men­ta­ti­on shall be exami­ned by the noti­fi­ed body. Whe­re rele­vant, and limi­t­ed to what is neces­sa­ry to ful­fil its tasks, the noti­fi­ed body shall be gran­ted full access to the trai­ning, vali­da­ti­on, and test­ing data sets used, inclu­ding, whe­re appro­pria­te and sub­ject to secu­ri­ty safe­guards, through API or other rele­vant tech­ni­cal means and tools enab­ling remo­te access.

4.4. In exami­ning the tech­ni­cal docu­men­ta­ti­on, the noti­fi­ed body may requi­re that the pro­vi­der sup­p­ly fur­ther evi­dence or car­ry out fur­ther tests so as to enable a pro­per assess­ment of the con­for­mi­ty of the AI system with the requi­re­ments set out in Chap­ter III, Sec­tion 2. Whe­re the noti­fi­ed body is not satis­fied with the tests car­ri­ed out by the pro­vi­der, the noti­fi­ed body shall its­elf direct­ly car­ry out ade­qua­te tests, as appropriate.

4.5. Whe­re neces­sa­ry to assess the con­for­mi­ty of the high-risk AI system with the requi­re­ments set out in Chap­ter III, Sec­tion 2, after all other rea­sonable means to veri­fy con­for­mi­ty have been exhau­sted and have pro­ven to be insuf­fi­ci­ent, and upon a rea­so­ned request, the noti­fi­ed body shall also be gran­ted access to the trai­ning and trai­ned models of the AI system, inclu­ding its rele­vant para­me­ters. Such access shall be sub­ject to exi­sting Uni­on law on the pro­tec­tion of intellec­tu­al pro­per­ty and trade secrets.

4.6. The decis­i­on of the noti­fi­ed body shall be noti­fi­ed to the pro­vi­der or its aut­ho­ri­sed repre­sen­ta­ti­ve. The noti­fi­ca­ti­on shall con­tain the con­clu­si­ons of the assess­ment of the tech­ni­cal docu­men­ta­ti­on and the rea­so­ned assess­ment decision.

Whe­re the AI system is in con­for­mi­ty with the requi­re­ments set out in Chap­ter III, Sec­tion 2, the noti­fi­ed body shall issue a Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te. The cer­ti­fi­ca­te shall indi­ca­te the name and address of the pro­vi­der, the con­clu­si­ons of the exami­na­ti­on, the con­di­ti­ons (if any) for its vali­di­ty and the data neces­sa­ry for the iden­ti­fi­ca­ti­on of the AI system.

The cer­ti­fi­ca­te and its anne­xes shall con­tain all rele­vant infor­ma­ti­on to allow the con­for­mi­ty of the AI system to be eva­lua­ted, and to allow for con­trol of the AI system while in use, whe­re applicable.

Whe­re the AI system is not in con­for­mi­ty with the requi­re­ments set out in Chap­ter III, Sec­tion 2, the noti­fi­ed body shall refu­se to issue a Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te and shall inform the appli­cant accor­din­gly, giving detail­ed rea­sons for its refusal. 

Whe­re the AI system does not meet the requi­re­ment rela­ting to the data used to train it, re-trai­ning of the AI system will be nee­ded pri­or to the appli­ca­ti­on for a new con­for­mi­ty assess­ment. In this case, the rea­so­ned assess­ment decis­i­on of the noti­fi­ed body refu­sing to issue the Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te shall con­tain spe­ci­fic con­side­ra­ti­ons on the qua­li­ty data used to train the AI system, in par­ti­cu­lar on the rea­sons for non-compliance.

4.7. Any chan­ge to the AI system that could affect the com­pli­ance of the AI system with the requi­re­ments or its inten­ded pur­po­se shall be asses­sed by the noti­fi­ed body which issued the Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te. The pro­vi­der shall inform such noti­fi­ed body of its inten­ti­on to intro­du­ce any of the abo­ve­men­tio­ned chan­ges, or if it other­wi­se beco­mes awa­re of the occur­rence of such chan­ges. The inten­ded chan­ges shall be asses­sed by the noti­fi­ed body, which shall deci­de whe­ther tho­se chan­ges requi­re a new con­for­mi­ty assess­ment in accordance with Artic­le 43(4) or whe­ther they could be addres­sed by means of a sup­ple­ment to the Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te. In the lat­ter case, the noti­fi­ed body shall assess the chan­ges, noti­fy the pro­vi­der of its decis­i­on and, whe­re the chan­ges are appro­ved, issue to the pro­vi­der a sup­ple­ment to the Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment certificate.

5. Sur­veil­lan­ce of the appro­ved qua­li­ty manage­ment system.

5.1. The purpose of the surveillance carried out by the notified body referred to in Point 3 is to make sure that the provider duly complies with the terms and conditions of the approved quality management system.

5.2. For assess­ment pur­po­ses, the pro­vi­der shall allow the noti­fi­ed body to access the pre­mi­ses whe­re the design, deve­lo­p­ment, test­ing of the AI systems is taking place. The pro­vi­der shall fur­ther share with the noti­fi­ed body all neces­sa­ry information.

5.3. The noti­fi­ed body shall car­ry out peri­odic audits to make sure that the pro­vi­der main­ta­ins and applies the qua­li­ty manage­ment system and shall pro­vi­de the pro­vi­der with an audit report. In the con­text of tho­se audits, the noti­fi­ed body may car­ry out addi­tio­nal tests of the AI systems for which a Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te was issued. 

ANNEX VIII Infor­ma­ti­on to be sub­mit­ted upon the regi­stra­ti­on of high-risk AI systems in accordance with Artic­le 49

Sec­tion A – Infor­ma­ti­on to be sub­mit­ted by pro­vi­ders of high-risk AI systems in accordance with Artic­le 49(1)

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to high-risk AI systems to be regi­stered in accordance with Artic­le 49(1):

1. The name, address and cont­act details of the provider;

2. Whe­re sub­mis­si­on of infor­ma­ti­on is car­ri­ed out by ano­ther per­son on behalf of the pro­vi­der, the name, address and cont­act details of that person;

3. The name, address and cont­act details of the aut­ho­ri­sed repre­sen­ta­ti­ve, whe­re applicable;

4. The AI system trade name and any addi­tio­nal unam­bi­guous refe­rence allo­wing the iden­ti­fi­ca­ti­on and tracea­bi­li­ty of the AI system;

5. A descrip­ti­on of the inten­ded pur­po­se of the AI system and of the com­pon­ents and func­tions sup­port­ed through this AI system;

6. A basic and con­cise descrip­ti­on of the infor­ma­ti­on used by the system (data, inputs) and its ope­ra­ting logic;

7. The sta­tus of the AI system (on the mar­ket, or in ser­vice; no lon­ger pla­ced on the market/in ser­vice, recalled);

8. The type, num­ber and expiry date of the cer­ti­fi­ca­te issued by the noti­fi­ed body and the name or iden­ti­fi­ca­ti­on num­ber of that noti­fi­ed body, whe­re applicable;

9. A scan­ned copy of the cer­ti­fi­ca­te refer­red to in point 8, whe­re applicable;

10. Any Mem­ber Sta­tes in which the AI system has been pla­ced on the mar­ket, put into ser­vice or made available in the Union;

11. A copy of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 47;

12. Elec­tro­nic ins­truc­tions for use; this infor­ma­ti­on shall not be pro­vi­ded for high-risk AI systems in the are­as of law enforce­ment or migra­ti­on, asyl­um and bor­der con­trol manage­ment refer­red to in Annex III, points 1, 6 and 7;

13. A URL for addi­tio­nal infor­ma­ti­on (optio­nal).
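As a reading aid only (not an official schema of the EU database), the Section A fields above could be modelled as a simple record. All field and class names below are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class HighRiskAIRegistration:
    """Illustrative record of the Annex VIII, Section A fields; names are invented."""
    provider: str                                    # point 1: name, address, contact details
    submitter: Optional[str] = None                  # point 2: person submitting on behalf of the provider
    authorised_representative: Optional[str] = None  # point 3, where applicable
    trade_name: str = ""                             # point 4: trade name and unambiguous reference
    intended_purpose: str = ""                       # point 5
    data_and_logic: str = ""                         # point 6: data, inputs and operating logic
    status: str = "on the market"                    # point 7
    certificate: Optional[str] = None                # points 8-9, where applicable
    member_states: List[str] = field(default_factory=list)  # point 10
    declaration_of_conformity: str = ""              # point 11 (Article 47)
    instructions_for_use: Optional[str] = None       # point 12 (not for Annex III points 1, 6, 7)
    additional_info_url: Optional[str] = None        # point 13 (optional)

# example entry, kept up to date by the provider
entry = HighRiskAIRegistration(
    provider="ACME AG, Example Street 1, info@example.com",
    trade_name="ACME Screening v2",
    member_states=["DE", "FR"],
)
```

Points 2, 3, 8, 9 and 12 are conditional ("where applicable"), which is why they default to empty here.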

Sec­tion B – Infor­ma­ti­on to be sub­mit­ted by pro­vi­ders of high-risk AI systems in accordance with Artic­le 49(2)

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to AI systems to be regi­stered in accordance with Artic­le 49(2):

1. The name, address and cont­act details of the provider;

2. Whe­re sub­mis­si­on of infor­ma­ti­on is car­ri­ed out by ano­ther per­son on behalf of the pro­vi­der, the name, address and cont­act details of that person;

3. The name, address and cont­act details of the aut­ho­ri­sed repre­sen­ta­ti­ve, whe­re applicable;

4. The AI system trade name and any addi­tio­nal unam­bi­guous refe­rence allo­wing the iden­ti­fi­ca­ti­on and tracea­bi­li­ty of the AI system;

5. A descrip­ti­on of the inten­ded pur­po­se of the AI system;

6. The condition or conditions under Article 6(3) based on which the AI system is considered to be not-high-risk;

7. A short sum­ma­ry of the grounds on which the AI system is con­side­red to be not-high-risk in appli­ca­ti­on of the pro­ce­du­re under Artic­le 6(3);

8. The sta­tus of the AI system (on the mar­ket, or in ser­vice; no lon­ger pla­ced on the market/in ser­vice, recalled);

9. Any Mem­ber Sta­tes in which the AI system has been pla­ced on the mar­ket, put into ser­vice or made available in the Union.

Sec­tion C – Infor­ma­ti­on to be sub­mit­ted by deployers of high-risk AI systems in accordance with Artic­le 49(3)

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to high-risk AI systems to be regi­stered in accordance with Artic­le 49:

1. The name, address and cont­act details of the deployer;

2. The name, address and cont­act details of the per­son sub­mit­ting infor­ma­ti­on on behalf of the deployer;

3. The URL of the ent­ry of the AI system in the EU data­ba­se by its provider;

4. A sum­ma­ry of the fin­dings of the fun­da­men­tal rights impact assess­ment con­duc­ted in accordance with Artic­le 27;

5. A sum­ma­ry of the data pro­tec­tion impact assess­ment car­ri­ed out in accordance with Artic­le 35 of Regu­la­ti­on (EU) 2016/679 or Artic­le 27 of Direc­ti­ve (EU) 2016/680 as spe­ci­fi­ed in Artic­le 26(8) of this Regu­la­ti­on, whe­re applicable. 

ANNEX IX Infor­ma­ti­on to be sub­mit­ted upon the regi­stra­ti­on of high-risk AI systems listed in Annex III in rela­ti­on to test­ing in real world con­di­ti­ons in accordance with Artic­le 60

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to test­ing in real world con­di­ti­ons to be regi­stered in accordance with Artic­le 60:

1. A Uni­on-wide uni­que sin­gle iden­ti­fi­ca­ti­on num­ber of the test­ing in real world conditions;

2. The name and cont­act details of the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and of the deployers invol­ved in the test­ing in real world conditions;

3. A brief descrip­ti­on of the AI system, its inten­ded pur­po­se, and other infor­ma­ti­on neces­sa­ry for the iden­ti­fi­ca­ti­on of the system;

4. A sum­ma­ry of the main cha­rac­te­ri­stics of the plan for test­ing in real world conditions;

5. Infor­ma­ti­on on the sus­pen­si­on or ter­mi­na­ti­on of the test­ing in real world conditions. 

ANNEX X Uni­on legis­la­ti­ve acts on lar­ge-sca­le IT systems in the area of Free­dom, Secu­ri­ty and Justice

1. Schen­gen Infor­ma­ti­on System

(a) Regu­la­ti­on (EU) 2018/1860 of the Euro­pean Par­lia­ment and of the Coun­cil of 28 Novem­ber 2018 on the use of the Schen­gen Infor­ma­ti­on System for the return of ille­gal­ly stay­ing third-coun­try natio­nals (OJ L 312, 7.12.2018, p. 1).

(b) Regu­la­ti­on (EU) 2018/1861 of the Euro­pean Par­lia­ment and of the Coun­cil of 28 Novem­ber 2018 on the estab­lish­ment, ope­ra­ti­on and use of the Schen­gen Infor­ma­ti­on System (SIS) in the field of bor­der checks, and amen­ding the Con­ven­ti­on imple­men­ting the Schen­gen Agree­ment, and amen­ding and repe­al­ing Regu­la­ti­on (EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14).

(c) Regu­la­ti­on (EU) 2018/1862 of the Euro­pean Par­lia­ment and of the Coun­cil of 28 Novem­ber 2018 on the estab­lish­ment, ope­ra­ti­on and use of the Schen­gen Infor­ma­ti­on System (SIS) in the field of poli­ce coope­ra­ti­on and judi­cial coope­ra­ti­on in cri­mi­nal mat­ters, amen­ding and repe­al­ing Coun­cil Decis­i­on 2007/533/JHA, and repe­al­ing Regu­la­ti­on (EC) No 1986/2006 of the Euro­pean Par­lia­ment and of the Coun­cil and Com­mis­si­on Decis­i­on 2010/261/EU (OJ L 312, 7.12.2018, p. 56).

2. Visa Infor­ma­ti­on System

(a) Regu­la­ti­on (EU) 2021/1133 of the Euro­pean Par­lia­ment and of the Coun­cil of 7 July 2021 amen­ding Regu­la­ti­ons (EU) No 603/2013, (EU) 2016/794, (EU) 2018/1862, (EU) 2019/816 and (EU) 2019/818 as regards the estab­lish­ment of the con­di­ti­ons for acce­s­sing other EU infor­ma­ti­on systems for the pur­po­ses of the Visa Infor­ma­ti­on System (OJ L 248, 13.7.2021, p. 1).

(b) Regu­la­ti­on (EU) 2021/1134 of the Euro­pean Par­lia­ment and of the Coun­cil of 7 July 2021 amen­ding Regu­la­ti­ons (EC) No 767/2008, (EC) No 810/2009, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1860, (EU) 2018/1861, (EU) 2019/817 and (EU) 2019/1896 of the Euro­pean Par­lia­ment and of the Coun­cil and repe­al­ing Coun­cil Decis­i­ons 2004/512/EC and 2008/633/JHA, for the pur­po­se of reforming the Visa Infor­ma­ti­on System (OJ L 248, 13.7.2021, p. 11).

3. Eurodac

Regu­la­ti­on (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil on the estab­lish­ment of ‘Euro­dac’ for the com­pa­ri­son of bio­me­tric data in order to effec­tively app­ly Regu­la­ti­ons (EU) 2024/… and (EU) 2024/… of the Euro­pean Par­lia­ment and of the Coun­cil and Coun­cil Direc­ti­ve 2001/55/EC and to iden­ti­fy ille­gal­ly stay­ing third-coun­try natio­nals and sta­te­l­ess per­sons and on requests for the com­pa­ri­son with Euro­dac data by Mem­ber Sta­tes’ law enforce­ment aut­ho­ri­ties and Euro­pol for law enforce­ment pur­po­ses, amen­ding Regu­la­ti­ons (EU) 2018/1240 and (EU) 2019/818 of the Euro­pean Par­lia­ment and of the Coun­cil and repe­al­ing Regu­la­ti­on (EU) No 603/2013 of the Euro­pean Par­lia­ment and of the Council+.

4. Entry/Exit System

Regu­la­ti­on (EU) 2017/2226 of the Euro­pean Par­lia­ment and of the Coun­cil of 30 Novem­ber 2017 estab­li­shing an Entry/Exit System (EES) to regi­ster ent­ry and exit data and refu­sal of ent­ry data of third-coun­try natio­nals crossing the exter­nal bor­ders of the Mem­ber Sta­tes and deter­mi­ning the con­di­ti­ons for access to the EES for law enforce­ment pur­po­ses, and amen­ding the Con­ven­ti­on imple­men­ting the Schen­gen Agree­ment and Regu­la­ti­ons (EC) No 767/2008 and (EU) No 1077/2011 (OJ L 327, 9.12.2017, p. 20).

5. Euro­pean Tra­vel Infor­ma­ti­on and Aut­ho­ri­sa­ti­on System

(a) Regu­la­ti­on (EU) 2018/1240 of the Euro­pean Par­lia­ment and of the Coun­cil of 12 Sep­tem­ber 2018 estab­li­shing a Euro­pean Tra­vel Infor­ma­ti­on and Aut­ho­ri­sa­ti­on System (ETIAS) and amen­ding Regu­la­ti­ons (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, 19.9.2018, p. 1).

(b) Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018 amending Regulation (EU) 2016/794 for the purpose of establishing a European Travel Information and Authorisation System (ETIAS) (OJ L 236, 19.9.2018, p. 72).

6. Euro­pean Cri­mi­nal Records Infor­ma­ti­on System on third-coun­try natio­nals and sta­te­l­ess persons 

Regu­la­ti­on (EU) 2019/816 of the Euro­pean Par­lia­ment and of the Coun­cil of 17 April 2019 estab­li­shing a cen­tra­li­sed system for the iden­ti­fi­ca­ti­on of Mem­ber Sta­tes hol­ding con­vic­tion infor­ma­ti­on on third-coun­try natio­nals and sta­te­l­ess per­sons (ECRIS- TCN) to sup­ple­ment the Euro­pean Cri­mi­nal Records Infor­ma­ti­on System and amen­ding Regu­la­ti­on (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1).

7. Interoperability

(a) Regu­la­ti­on (EU) 2019/817 of the Euro­pean Par­lia­ment and of the Coun­cil of 20 May 2019 on estab­li­shing a frame­work for inter­ope­ra­bi­li­ty bet­ween EU infor­ma­ti­on systems in the field of bor­ders and visa and amen­ding Regu­la­ti­ons (EC) No 767/2008, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1726 and (EU) 2018/1861 of the Euro­pean Par­lia­ment and of the Coun­cil and Coun­cil Decis­i­ons 2004/512/EC and 2008/633/JHA (OJ L 135, 22.5.2019, p. 27).

(b) Regu­la­ti­on (EU) 2019/818 of the Euro­pean Par­lia­ment and of the Coun­cil of 20 May 2019 on estab­li­shing a frame­work for inter­ope­ra­bi­li­ty bet­ween EU infor­ma­ti­on systems in the field of poli­ce and judi­cial coope­ra­ti­on, asyl­um and migra­ti­on and amen­ding Regu­la­ti­ons (EU) 2018/1726, (EU) 2018/1862 and (EU) 2019/816 (OJ L 135, 22.5.2019, p. 85).

ANNEX XI Tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 53(1), point (a) – tech­ni­cal docu­men­ta­ti­on for pro­vi­ders of gene­ral-pur­po­se AI models

Sec­tion 1 Infor­ma­ti­on to be pro­vi­ded by all pro­vi­ders of gene­ral-pur­po­se AI models

The tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 53(1), point (a) shall con­tain at least the fol­lo­wing infor­ma­ti­on as appro­pria­te to the size and risk pro­fi­le of the model:

1. A gene­ral descrip­ti­on of the gene­ral-pur­po­se AI model including:

(a) the tasks that the model is inten­ded to per­form and the type and natu­re of AI systems in which it can be integrated;

(b) the accep­ta­ble use poli­ci­es applicable;

(c) the date of release and methods of distribution;

(d) the archi­tec­tu­re and num­ber of parameters;

(e) the moda­li­ty (e.g. text, image) and for­mat of inputs and outputs;

(f) the licence.

2. A detail­ed descrip­ti­on of the ele­ments of the model refer­red to in point 1, and rele­vant infor­ma­ti­on of the pro­cess for the deve­lo­p­ment, inclu­ding the fol­lo­wing elements:

(a) the tech­ni­cal means (e.g. ins­truc­tions of use, infras­truc­tu­re, tools) requi­red for the gene­ral-pur­po­se AI model to be inte­gra­ted in AI systems;

(b) the design spe­ci­fi­ca­ti­ons of the model and trai­ning pro­cess, inclu­ding trai­ning metho­do­lo­gies and tech­ni­ques, the key design choices inclu­ding the ratio­na­le and assump­ti­ons made; what the model is desi­gned to opti­mi­se for and the rele­van­ce of the dif­fe­rent para­me­ters, as applicable;

(c) information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;

(d) the computational resources used to train the model (e.g. number of floating point operations), training time, and other relevant details related to the training;

(e) known or esti­ma­ted ener­gy con­sump­ti­on of the model.

With regard to point (e), whe­re the ener­gy con­sump­ti­on of the model is unknown, the ener­gy con­sump­ti­on may be based on infor­ma­ti­on about com­pu­ta­tio­nal resour­ces used.
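As a reading aid: the sentence above permits basing the point (e) figure on computational resources where measured energy consumption is unknown. A rough back-of-the-envelope sketch of such a derivation is shown below; the hardware count, average power draw and training time are hypothetical inputs, and overheads such as cooling (PUE) are ignored:

```python
def estimate_energy_kwh(device_count: int, avg_power_w: float, training_hours: float) -> float:
    """Rough training-energy estimate from computational resources:
    total device power (kW) multiplied by training time (h).
    Ignores cooling and other data-centre overhead (PUE)."""
    return device_count * avg_power_w / 1000.0 * training_hours

# e.g. 512 accelerators at 400 W average draw, trained for 720 hours:
energy = estimate_energy_kwh(512, 400.0, 720.0)  # about 147,456 kWh
```

A provider with measured consumption figures would report those instead; this derivation is only the fallback the Annex allows.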

Sec­tion 2 Addi­tio­nal infor­ma­ti­on to be pro­vi­ded by pro­vi­ders of gene­ral-pur­po­se AI models with syste­mic risk

1. A detail­ed descrip­ti­on of the eva­lua­ti­on stra­te­gies, inclu­ding eva­lua­ti­on results, on the basis of available public eva­lua­ti­on pro­to­cols and tools or other­wi­se of other eva­lua­ti­on metho­do­lo­gies. Eva­lua­ti­on stra­te­gies shall include eva­lua­ti­on cri­te­ria, metrics and the metho­do­lo­gy on the iden­ti­fi­ca­ti­on of limitations.

2. Whe­re appli­ca­ble, a detail­ed descrip­ti­on of the mea­su­res put in place for the pur­po­se of con­duc­ting inter­nal and/or exter­nal adver­sa­ri­al test­ing (e.g., red team­ing), model adap­t­ati­ons, inclu­ding ali­gnment and fine-tuning.

3. Whe­re appli­ca­ble, a detail­ed descrip­ti­on of the system archi­tec­tu­re explai­ning how soft­ware com­pon­ents build or feed into each other and inte­gra­te into the over­all processing.

ANNEX XII Trans­pa­ren­cy infor­ma­ti­on refer­red to in Artic­le 53(1), point (b) – tech­ni­cal docu­men­ta­ti­on for pro­vi­ders of gene­ral-pur­po­se AI models to down­stream pro­vi­ders that inte­gra­te the model into their AI system

The infor­ma­ti­on refer­red to in Artic­le 53(1), point (b) shall con­tain at least the following:

1. A gene­ral descrip­ti­on of the gene­ral-pur­po­se AI model including:

(a) the tasks that the model is inten­ded to per­form and the type and natu­re of AI systems into which it can be integrated;

(b) the accep­ta­ble use poli­ci­es applicable;

(c) the date of release and methods of distribution;

(d) how the model inter­acts, or can be used to inter­act, with hard­ware or soft­ware that is not part of the model its­elf, whe­re applicable;

(e) the ver­si­ons of rele­vant soft­ware rela­ted to the use of the gene­ral-pur­po­se AI model, whe­re applicable;

(f) the archi­tec­tu­re and num­ber of parameters;

(g) the moda­li­ty (e.g., text, image) and for­mat of inputs and outputs;

(h) the licence for the model.

2. A descrip­ti­on of the ele­ments of the model and of the pro­cess for its deve­lo­p­ment, including:

(a) the tech­ni­cal means (e.g., ins­truc­tions for use, infras­truc­tu­re, tools) requi­red for the gene­ral-pur­po­se AI model to be inte­gra­ted into AI systems;

(b) the moda­li­ty (e.g., text, image, etc.) and for­mat of the inputs and out­puts and their maxi­mum size (e.g., con­text win­dow length, etc.);

(c) infor­ma­ti­on on the data used for trai­ning, test­ing and vali­da­ti­on, whe­re appli­ca­ble, inclu­ding the type and pro­ven­an­ce of data and cura­ti­on methodologies. 

ANNEX XIII Cri­te­ria for the desi­gna­ti­on of gene­ral-pur­po­se AI models with syste­mic risk refer­red to in Artic­le 51

For the pur­po­se of deter­mi­ning that a gene­ral-pur­po­se AI model has capa­bi­li­ties or an impact equi­va­lent to tho­se set out in Artic­le 51(1), point (a), the Com­mis­si­on shall take into account the fol­lo­wing criteria:

(a) the num­ber of para­me­ters of the model;

(b) the qua­li­ty or size of the data set, for exam­p­le mea­su­red through tokens;

(c) the amount of com­pu­ta­ti­on used for trai­ning the model, mea­su­red in floa­ting point ope­ra­ti­ons or indi­ca­ted by a com­bi­na­ti­on of other varia­bles such as esti­ma­ted cost of trai­ning, esti­ma­ted time requi­red for the trai­ning, or esti­ma­ted ener­gy con­sump­ti­on for the training;

(d) the input and out­put moda­li­ties of the model, such as text to text (lar­ge lan­guage models), text to image, mul­ti-moda­li­ty, and the sta­te of the art thres­holds for deter­mi­ning high-impact capa­bi­li­ties for each moda­li­ty, and the spe­ci­fic type of inputs and out­puts (e.g. bio­lo­gi­cal sequences);

(e) the bench­marks and eva­lua­tions of capa­bi­li­ties of the model, inclu­ding con­side­ring the num­ber of tasks wit­hout addi­tio­nal trai­ning, adap­ta­bi­li­ty to learn new, distinct tasks, its level of auto­no­my and sca­la­bi­li­ty, the tools it has access to;

(f) whe­ther it has a high impact on the inter­nal mar­ket due to its reach, which shall be pre­su­med when it has been made available to at least 10 000 regi­stered busi­ness users estab­lished in the Union;

(g) the num­ber of regi­stered end-users.
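As a reading aid to criterion (c): under Article 51(2), high-impact capabilities are presumed when the cumulative training compute exceeds 10^25 floating point operations. A common rule of thumb for dense transformer models (an engineering heuristic, not part of the Regulation) estimates training compute as roughly 6 FLOPs per parameter per training token:

```python
def training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rule-of-thumb training-compute estimate for dense transformers:
    ~6 FLOPs per parameter per training token (forward + backward pass)."""
    return 6.0 * n_parameters * n_training_tokens

# e.g. a 70-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)        # about 8.4e23 FLOPs
exceeds_presumption = flops > 1e25        # Article 51(2) presumption threshold
```

The parameter count and token count here are hypothetical; in this example the estimate stays below the 10^25 presumption threshold, so designation would instead turn on the other Annex XIII criteria.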