AI Act (KI-Verordnung)

Draft of the final version of the AI Act. Final adjustments, in particular editorial ones, remain reserved (see here).

Recitals

(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, placing on the market, putting into service and the use of artificial intelligence systems in the Union in conformity with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy and rule of law and environmental protection, against harmful effects of artificial intelligence systems in the Union and to support innovation. This Regulation ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of Artificial Intelligence systems (AI systems), unless explicitly authorised by this Regulation.

(1a) This Regulation should be applied in conformity with the values of the Union enshrined in the Charter facilitating the protection of individuals, companies, democracy and rule of law and the environment while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.

(2) AI systems can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and uptake of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of law enforcement, for the use of AI systems for risk assessments of natural persons for the purpose of law enforcement and for the use of AI systems of biometric categorisation for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
(3) Artificial intelligence is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.

(4) At the same time, depending on the circumstances regarding its specific application, use, and level of technological development, artificial intelligence may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.

(4a) Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties and the Charter. As a pre-requisite, artificial intelligence should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.

(4aa) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for all high-risk AI systems should be established. Those rules should be consistent with the Charter of Fundamental Rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. They should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade (2023/C 23/01) and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence.

(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, including democracy, rule of law and environmental protection as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, putting into service and use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services.
These rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI systems in line with Union values and unlocking the potential of the digital transformation across all regions of the Union. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs including startups, this Regulation supports the objective of promoting the European human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical artificial intelligence as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.

(5a) The harmonised rules on the placing on the market, putting into service and use of AI systems laid down in this Regulation should apply across sectors and, in line with its New Legislative Framework approach, should be without prejudice to existing Union law, notably on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary. As a consequence all rights and remedies provided for by such Union law to consumers, and other persons who may be negatively impacted by AI systems, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, remain unaffected and fully applicable. Furthermore, in the context of employment and protection of workers, this Regulation should therefore not affect Union law on social policy and national labour law, in compliance with Union law, concerning employment and working conditions, including health and safety at work and the relationship between employers and workers. This Regulation should also not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States as well as the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law. [This Regulation should not affect the provisions aiming to improve working conditions in platform work set out in Directive … [COD 2021/414/EC]] On top of that, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of transparency, technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national laws, in compliance with Union law, having the effect of limiting the use of certain AI systems where such laws fall outside the scope of this Regulation or pursue other legitimate public interest objectives than those pursued by this Regulation.
For example, national labour law and the laws on the protection of minors (i.e. persons below the age of 18) taking into account the United Nations General Comment No 25 (2021) on children’s rights, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, should not be affected by this Regulation.

(5aa) The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive 2016/680. Directive 2002/58/EC additionally protects private life and the confidentiality of communications, including by way of providing conditions for any personal and non-personal data storing in and access from terminal equipment. Those Union legal acts provide the basis for sustainable and responsible data processing, including where datasets include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments. It also does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from national or Union law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and the use of AI systems established under this Regulation should facilitate the effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights.

(5ab) This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].

(6) The notion of AI system in this Regulation should be clearly defined and closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, facilitate international convergence and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. Moreover, it should be based on key characteristics of artificial intelligence systems, that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer.
This inference refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments and to a capability of AI systems to derive models and/or algorithms from inputs/data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives; and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer goes beyond basic data processing, enabling learning, reasoning or modelling. The term “machine-based” refers to the fact that AI systems run on machines. The reference to explicit or implicit objectives underscores that AI systems can operate according to explicitly defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. For the purposes of this Regulation, environments should be understood as the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems and include predictions, content, recommendations or decisions. AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).

(6a) The notion of ‘deployer’ referred to in this Regulation should be interpreted as any natural or legal person, including a public authority, agency or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Depending on the type of AI system, the use of the system may affect persons other than the deployer.

(7) The notion of biometric data used in this Regulation should be interpreted in light of the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council, Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council. Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.
(7a) The notion of biometric identification as used in this Regulation should be defined as the automated recognition of physical, physiological and behavioural human features such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour, keystrokes characteristics, for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a reference database, irrespective of whether the individual has given its consent or not. This excludes AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises.

(7b) The notion of biometric categorisation as used in this Regulation should be defined as assigning natural persons to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation. This does not include biometric categorisation systems that are a purely ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective technical reasons, be used without the principal service and the integration of that feature or functionality is not a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body features used on online marketplaces could constitute such an ancillary feature as they can only be used in relation to the principal service which consists in selling a product by allowing the consumer to preview the display of the product on him or herself and help the consumer to make a purchase decision. Filters used on online social network services which categorise facial or body features to allow users to add or modify pictures or videos could also be considered as an ancillary feature as such filters cannot be used without the principal service of the social network services consisting in the sharing of content online.

(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons without their active involvement, typically at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database, irrespectively of the particular technology, processes or types of biometric data used. Such remote biometric identification systems are typically used to perceive multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of natural persons without their active involvement.
This excludes AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises. This exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to the remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons without their active involvement. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.

(8a) The notion of emotion recognition system for the purpose of this Regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. This refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue. It does not refer for example to systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. It does also not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. These expressions can be basic facial expressions such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, for example a raised voice or whispering.
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to an undetermined number of natural persons, and irrespective of whether the place in question is privately or publicly owned and irrespective of the activity for which the place may be used, such as commerce (for instance, shops, restaurants, cafés), services (for instance, banks, professional activities, hospitality), sport (for instance, swimming pools, gyms, stadiums), transport (for instance, bus, metro and railway stations, airports, means of transport), entertainment (for instance, cinemas, theatres, museums, concert and conference halls), leisure or otherwise (for instance, public roads and squares, parks, forests, playgrounds). A place should be classified as publicly accessible also if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions, which can be fulfilled by an undetermined number of persons, such as purchase of a ticket or title of transport, prior registration or having a certain age. By contrast, a place should not be considered publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority on the place. The factual possibility of access alone (e.g. an unlocked door, an open gate in a fence) does not imply that the place is publicly accessible in the presence of indications or circumstances suggesting the contrary (e.g. signs prohibiting or restricting access). Company and factory premises as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers are places that are not publicly accessible. Publicly accessible spaces should not include prisons or border control. Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor’s office or an airport. Online spaces are not covered either, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.

(9b) In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. These notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will impact them.
In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board should support the Commission to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI.

(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.

(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of cooperation or international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States, under the condition that this third country or international organisations provide adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.
Where relevant, this may also cover activities of entities entrusted by the third countries to carry out specific tasks in support of such law enforcement and judicial cooperation. Such frameworks for cooperation or agreements have been established bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. The authorities competent for supervision of the law enforcement and judicial authorities under the AI Act should assess whether these frameworks for cooperation or international agreements include adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Recipient Member States’ authorities and Union institutions, offices and bodies making use of such outputs in the Union remain accountable to ensure their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should undertake the utmost effort to align those agreements with the requirements of this Regulation.

(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or deployer of an AI system.

(12a) If and insofar as AI systems are placed on the market, put into service, or used with or without modification of such systems for military, defence or national security purposes, those should be excluded from the scope of this Regulation regardless of which type of entity is carrying out those activities, such as whether it is a public or private entity. As regards military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by the specificities of the Member States’ and the common Union defence policy covered by Chapter 2 of Title V of the Treaty on European Union (TEU) that are subject to public international law, which is therefore the more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities. As regards national security purposes, the exclusion is justified both by the fact that national security remains the sole responsibility of Member States in accordance with Article 4(2) TEU and by the specific nature and operational needs of national security activities and specific national rules applicable to those activities. Nonetheless, if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used outside those temporarily or permanently for other purposes (for example, civilian or humanitarian purposes, law enforcement or public security purposes), such a system would fall within the scope of this Regulation. In that case, the entity using the system for other than military, defence or national security purposes should ensure compliance of the system with this Regulation, unless the system is already compliant with this Regulation. AI systems placed on the market or put into service for an excluded (i.e. military, defence or national security) and one or more non-excluded purposes (e.g.
civilian purposes, law enforcement, etc.), fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation. In those cases, the fact that an AI system may fall within the scope of this Regulation should not affect the possibility of entities carrying out national security, defence and military activities, regardless of the type of entity carrying out those activities, to use AI systems for national security, military and defence purposes, the use of which is excluded from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes which is used with or without modification for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities.

(12c) This Regulation should support innovation, respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service. As regards product-oriented research, testing and development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior to these systems and models being put into service or placed on the market. This is without prejudice to the obligation to comply with this Regulation when an AI system falling into the scope of this Regulation is placed on the market or put into service as a result of such research and development activity and to the application of provisions on regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the foregoing regarding AI systems specifically developed and put into service for the sole purpose of scientific research and development, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical and professional standards for scientific research and should be conducted according to applicable Union law.

(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
(14a) While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 Ethics Guidelines for Trustworthy AI developed by the independent High-Level Expert Group on AI (HLEG) appointed by the Commission. In those Guidelines the HLEG developed seven non-binding ethical principles for AI which should help ensure that AI is trustworthy and ethically sound. The seven principles include: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being and accountability. Without prejudice to the legally binding requirements of this Regulation and any other applicable Union law, these Guidelines contribute to the design of a coherent, trustworthy and human-centric Artificial Intelligence, in line with the Charter and with the values on which the Union is founded. According to the Guidelines of the HLEG, human agency and oversight means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans. Technical robustness and safety means that AI systems are developed and used in a way that allows robustness in case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and minimise unintended harm. Privacy and data governance means that AI systems are developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity. Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights. Diversity, non-discrimination and fairness means that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. Social and environmental well-being means that AI systems are developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. The application of these principles should be translated, when possible, in the design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this Regulation. All stakeholders, including industry, academia, civil society and standardisation organisations, are encouraged to take into account as appropriate the ethical principles for the development of voluntary best practices and standards.
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.

(16) AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of materially distorting human behaviour, whereby significant harms, in particular having sufficiently important adverse impacts on physical, psychological health or financial interests are likely to occur, are particularly dangerous and should therefore be forbidden. Such AI systems deploy subliminal components such as audio, image, video stimuli that persons cannot perceive as those stimuli are beyond human perception or other manipulative or deceptive techniques that subvert or impair a person’s autonomy, decision-making or free choices in ways that people are not consciously aware of, or even if aware they are still deceived or not able to control or resist. This could be, for example, facilitated by machine-brain interfaces or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they may be materially distorting their behaviour in a significantly harmful manner. In addition, AI systems may also otherwise exploit vulnerabilities of a person or a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882, or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme poverty, ethnic or religious minorities. Such AI systems can be placed on the market, put into service or used with the objective to or the effect of materially distorting the behaviour of a person and in a manner that causes or is reasonably likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time and should therefore be prohibited. The intention to distort the behaviour may not be presumed if the distortion results from factors external to the AI system which are outside of the control of the provider or the deployer, meaning factors that may not be reasonably foreseen and mitigated by the provider or the deployer of the AI system. In any case, it is not necessary for the provider or the deployer to have the intention to cause significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices.
The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC, notably unfair commercial practices leading to economic or financial harms to consumers are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or otherwise. The prohibitions of manipulative and exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable legislation and medical standards, for example explicit consent of the individuals or their legal representatives. In addition, common and legitimate commercial practices, for example in the field of advertising, that are in compliance with the applicable law should not in themselves be regarded as constituting harmful manipulative AI practices.

(16a) Biometric categorisation systems that are based on individuals’ biometric data, such as an individual person’s face or fingerprint, to deduce or infer an individual’s political opinions, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation should be prohibited. This prohibition does not cover the lawful labelling, filtering or categorisation of biometric datasets acquired in line with Union or national law according to biometric data, such as the sorting of images according to hair colour or eye colour, which can for example be used in the area of law enforcement.

(17) AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups thereof based on multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics over certain periods of time. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices leading to such detrimental or unfavourable outcomes should be therefore prohibited. This prohibition should not affect lawful evaluation practices of natural persons done for a specific purpose in compliance with national and Union law.
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, race, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.

(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for certain victims of crime including missing people; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the localisation or identification of perpetrators or suspects of the criminal offences referred to in Annex IIa if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, the list of criminal offences as referred in Annex IIa is based on the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, taking into account that some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the localisation or identification of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. An imminent threat to life or physical safety of natural persons could also result from a serious disruption of critical infrastructure, as defined in Article 2, point (a) of Directive 2008/114/EC, where the disruption or destruction of such critical infrastructure would result in an imminent threat to life or physical safety of a person, including through serious harm to the provision of basic supplies to the population or to the exercise of the core function of the State.
In addition, this Regulation should preserve the ability for law enforcement, border control, immigration or asylum authorities to carry out identity checks in the presence of the person that is concerned in accordance with the conditions set out in Union and national law for such checks. In particular, law enforcement, border control, immigration or asylum authorities should be able to use information systems, in accordance with Union or national law, to identify a person who, during an identity check, either refuses to be identified or is unable to state or prove his or her identity, without being required by this Regulation to obtain prior authorisation. This could be, for example, a person involved in a crime, being unwilling, or unable due to an accident or a medical condition, to disclose their identity to law enforcement authorities.

(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should only be deployed to confirm the specifically targeted individual’s identity and should be limited to what is strictly necessary concerning the period of time as well as geographic and personal scope, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces should only be authorised if the law enforcement authority has completed a fundamental rights impact assessment and, unless provided otherwise in this Regulation, has registered the system in the database as set out in this Regulation. The reference database of persons should be appropriate for each use case in each of the situations mentioned above.

(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority whose decision is binding of a Member State. Such authorisation should in principle be obtained prior to the use of the system with a view to identify a person or persons. Exceptions to this rule should be allowed in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself.
In addition, the law enforcement authority should in such situations request such authorisation whilst providing the reasons for not having been able to request it earlier, without undue delay and at the latest within 24 hours. If such authorisation is rejected, the use of real-time biometric identification systems linked to that authorisation should be stopped with immediate effect and all the data related to such use should be discarded and deleted. Such data includes input data directly acquired by an AI system in the course of the use of such system as well as the results and outputs of the use linked to that authorisation. It should not include input legally acquired in accordance with another national or Union law. In any case, no decision producing an adverse legal effect on a person may be taken solely based on the output of the remote biometric identification system.

(21a) In order to carry out their tasks in accordance with the requirements set out in this Regulation as well as in national rules, the relevant market surveillance authority and the national data protection authority should be notified of each use of the ‘real-time biometric identification system’. National market surveillance authorities and the national data protection authorities that have been notified should submit to the Commission an annual report on the use of ‘real-time biometric identification systems’.

(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation, that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation. These national rules should be notified to the Commission at the latest 30 days following their adoption.

(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for the purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680.
In this con­text, this Regu­la­ti­on is not inten­ded to pro­vi­de the legal basis for the pro­ce­s­sing of per­so­nal data under Artic­le 8 of Direc­ti­ve 2016/680. Howe­ver, the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for pur­po­ses other than law enforce­ment, inclu­ding by com­pe­tent aut­ho­ri­ties, should not be cover­ed by the spe­ci­fic frame­work regar­ding such use for the pur­po­se of law enforce­ment set by this Regu­la­ti­on. Such use for pur­po­ses other than law enforce­ment should the­r­e­fo­re not be sub­ject to the requi­re­ment of an aut­ho­ri­sa­ti­on under this Regu­la­ti­on and the appli­ca­ble detail­ed rules of natio­nal law that may give effect to it. (24) Any pro­ce­s­sing of bio­me­tric data and other per­so­nal data invol­ved in the use of AI systems for bio­me­tric iden­ti­fi­ca­ti­on, other than in con­nec­tion to the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for the pur­po­se of law enforce­ment as regu­la­ted by this Regu­la­ti­on, should con­ti­n­ue to com­ply with all requi­re­ments resul­ting from Artic­le 10 of Direc­ti­ve (EU) 2016/680. For pur­po­ses other than law enforce­ment, Artic­le 9(1) of Regu­la­ti­on (EU) 2016/679 and Artic­le 10(1) of Regu­la­ti­on (EU) 2018/1725 pro­hi­bit the pro­ce­s­sing of bio­me­tric data sub­ject to limi­t­ed excep­ti­ons as pro­vi­ded in tho­se artic­les. In appli­ca­ti­on of Artic­le 9(1) of Regu­la­ti­on (EU) 2016/679, the use of remo­te bio­me­tric iden­ti­fi­ca­ti­on for pur­po­ses other than law enforce­ment has alre­a­dy been sub­ject to pro­hi­bi­ti­on decis­i­ons by natio­nal data pro­tec­tion aut­ho­ri­ties. (25) In accordance with Artic­le 6a of Pro­to­col No 21 on the posi­ti­on of the United King­dom and Ire­land in respect of the area of free­dom, secu­ri­ty and justi­ce, as anne­xed to the TEU and to the TFEU, Ire­land is not bound by the rules laid down in Artic­le 5(1), point (d), (2), (3), (3a), (4) and (5), Artic­le 5(1)(ba) to the ext­ent it applies to the use of bio­me­tric cate­go­ri­sa­ti­on systems for acti­vi­ties in the field of poli­ce coope­ra­ti­on and judi­cial coope­ra­ti­on in cri­mi­nal mat­ters, Artic­le 5(1)(da) to the ext­ent it applies to the use of AI systems cover­ed by that pro­vi­si­on and Artic­le 29(6a) of this Regu­la­ti­on adopted on the basis of Artic­le 16 of the TFEU which rela­te to the pro­ce­s­sing of per­so­nal data by the Mem­ber Sta­tes when car­ry­ing out acti­vi­ties fal­ling within the scope of Chap­ter 4 or Chap­ter 5 of Tit­le V of Part Three of the TFEU, whe­re Ire­land is not bound by the rules gover­ning the forms of judi­cial coope­ra­ti­on in cri­mi­nal mat­ters or poli­ce coope­ra­ti­on which requi­re com­pli­ance with the pro­vi­si­ons laid down on the basis of Artic­le 16 of the TFEU. 
(26) In accordance with Artic­les 2 and 2a of Pro­to­col No 22 on the posi­ti­on of Den­mark, anne­xed to the TEU and TFEU, Den­mark is not bound by rules laid down in Artic­le 5(1), point (d), (2), (3), (3a), (4) and (5), Artic­le 5(1)(ba) to the ext­ent it applies to the use of bio­me­tric cate­go­ri­sa­ti­on systems for acti­vi­ties in the field of poli­ce coope­ra­ti­on and judi­cial coope­ra­ti­on in cri­mi­nal mat­ters, Artic­le 5(1)(da) to the ext­ent it applies to the use of AI systems cover­ed by that pro­vi­si­on and Artic­le 29(6a) of this Regu­la­ti­on adopted on the basis of Artic­le 16 of the TFEU, or sub­ject to their appli­ca­ti­on, which rela­te to the pro­ce­s­sing of per­so­nal data by the Mem­ber Sta­tes when car­ry­ing out acti­vi­ties fal­ling within the scope of Chap­ter 4 or Chap­ter 5 of Tit­le V of Part Three of the TFEU. (26a) In line with the pre­sump­ti­on of inno­cence, natu­ral per­sons in the EU should always be jud­ged on their actu­al beha­viour. Natu­ral per­sons should never be jud­ged on AI-pre­dic­ted beha­viour based sole­ly on their pro­fil­ing, per­so­na­li­ty traits or cha­rac­te­ri­stics, such as natio­na­li­ty, place of birth, place of resi­dence, num­ber of child­ren, debt, their type of car, wit­hout a rea­sonable sus­pi­ci­on of that per­son being invol­ved in a cri­mi­nal acti­vi­ty based on objec­ti­ve veri­fia­ble facts and wit­hout human assess­ment the­reof. The­r­e­fo­re, risk assess­ments of natu­ral per­sons in order to assess the risk of them offen­ding or for pre­dic­ting the occur­rence of an actu­al or poten­ti­al cri­mi­nal offence sole­ly based on the pro­fil­ing of a natu­ral per­son or on asses­sing their per­so­na­li­ty traits and cha­rac­te­ri­stics should be pro­hi­bi­ted. In any case, this pro­hi­bi­ti­on does not refer to nor touch upon risk ana­ly­tics that are not based on the pro­fil­ing of indi­vi­du­als or on the per­so­na­li­ty traits and cha­rac­te­ri­stics of indi­vi­du­als, such as AI systems using risk ana­ly­tics to assess the risk of finan­cial fraud by under­ta­kings based on sus­pi­cious tran­sac­tions or risk ana­ly­tic tools to pre­dict the likeli­hood of loca­li­sa­ti­on of nar­co­tics or illi­cit goods by cus­toms aut­ho­ri­ties, for exam­p­le based on known traf­ficking rou­tes. (26b) The pla­cing on the mar­ket, put­ting into ser­vice for this spe­ci­fic pur­po­se, or use of AI systems that crea­te or expand facial reco­gni­ti­on data­ba­ses through the unt­ar­ge­ted scra­ping of facial images from the inter­net or CCTV foota­ge should be pro­hi­bi­ted, as this prac­ti­ce adds to the fee­ling of mass sur­veil­lan­ce and can lead to gross vio­la­ti­ons of fun­da­men­tal rights, inclu­ding the right to pri­va­cy. (26c) The­re are serious con­cerns about the sci­en­ti­fic basis of AI systems aiming to iden­ti­fy or infer emo­ti­ons, par­ti­cu­lar­ly as expres­si­on of emo­ti­ons vary con­sider­a­b­ly across cul­tures and situa­tions, and even within a sin­gle indi­vi­du­al. Among the key short­co­mings of such systems are the limi­t­ed relia­bi­li­ty, the lack of spe­ci­fi­ci­ty and the limi­t­ed gene­ra­liza­bi­li­ty. The­r­e­fo­re, AI systems iden­ti­fy­ing or infer­ring emo­ti­ons or inten­ti­ons of natu­ral per­sons on the basis of their bio­me­tric data may lead to dis­cri­mi­na­to­ry out­co­mes and can be intru­si­ve to the rights and free­doms of the con­cer­ned per­sons. 
Con­side­ring the imba­lan­ce of power in the con­text of work or edu­ca­ti­on, com­bi­ned with the intru­si­ve natu­re of the­se systems, such systems could lead to detri­men­tal or unfa­voura­ble tre­at­ment of cer­tain natu­ral per­sons or who­le groups the­reof. The­r­e­fo­re, the pla­cing on the mar­ket, put­ting into ser­vice, or use of AI systems inten­ded to be used to detect the emo­tio­nal sta­te of indi­vi­du­als in situa­tions rela­ted to the work­place and edu­ca­ti­on should be pro­hi­bi­ted. This pro­hi­bi­ti­on should not cover AI systems pla­ced on the mar­ket strict­ly for medi­cal or safe­ty rea­sons, such as systems inten­ded for the­ra­peu­ti­cal use. (26d) Prac­ti­ces that are pro­hi­bi­ted by Uni­on legis­la­ti­on, inclu­ding data pro­tec­tion law, non- dis­cri­mi­na­ti­on law, con­su­mer pro­tec­tion law, and com­pe­ti­ti­on law, should not be affec­ted by this Regu­la­ti­on. (27) High-risk AI systems should only be pla­ced on the Uni­on mar­ket, put into ser­vice or used if they com­ply with cer­tain man­da­to­ry requi­re­ments. Tho­se requi­re­ments should ensu­re that high-risk AI systems available in the Uni­on or who­se out­put is other­wi­se used in the Uni­on do not pose unac­cep­ta­ble risks to important Uni­on public inte­rests as reco­g­nis­ed and pro­tec­ted by Uni­on law. Fol­lo­wing the New Legis­la­ti­ve Frame­work approach, as cla­ri­fi­ed in Com­mis­si­on noti­ce the ‘Blue Gui­de’ on the imple­men­ta­ti­on of EU pro­duct rules 2022 (C/2022/3637) the gene­ral rule is that seve­ral pie­ces of the EU legis­la­ti­on, such as Regu­la­ti­on (EU) 2017/745 on Medi­cal Devices and Regu­la­ti­on (EU) 2017/746 on In Vitro Dia­gno­stic Devices or Direc­ti­ve 2006/42/EC on Machi­nery, may have to be taken into con­side­ra­ti­on for one pro­duct, sin­ce the making available or put­ting into ser­vice can only take place when the pro­duct com­plies with all appli­ca­ble Uni­on har­mo­ni­sa­ti­on legis­la­ti­on. To ensu­re con­si­sten­cy and avo­id unneces­sa­ry admi­ni­stra­ti­ve bur­den or costs, pro­vi­ders of a pro­duct that con­ta­ins one or more high-risk arti­fi­ci­al intel­li­gence system, to which the requi­re­ments of this Regu­la­ti­on as well as requi­re­ments of the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex II, Sec­tion A app­ly, should have a fle­xi­bi­li­ty on ope­ra­tio­nal decis­i­ons on how to ensu­re com­pli­ance of a pro­duct that con­ta­ins one or more arti­fi­ci­al intel­li­gence systems with all appli­ca­ble requi­re­ments of the Uni­on har­mo­ni­s­ed legis­la­ti­on in a best way. AI systems iden­ti­fi­ed as high-risk should be limi­t­ed to tho­se that have a signi­fi­cant harmful impact on the health, safe­ty and fun­da­men­tal rights of per­sons in the Uni­on and such limi­ta­ti­on mini­mi­ses any poten­ti­al rest­ric­tion to inter­na­tio­nal trade, if any. (28) AI systems could have an adver­se impact to health and safe­ty of per­sons, in par­ti­cu­lar when such systems ope­ra­te as safe­ty com­pon­ents of pro­ducts. Con­sist­ent­ly with the objec­ti­ves of Uni­on har­mo­ni­sa­ti­on legis­la­ti­on to faci­li­ta­te the free move­ment of pro­ducts in the inter­nal mar­ket and to ensu­re that only safe and other­wi­se com­pli­ant pro­ducts find their way into the mar­ket, it is important that the safe­ty risks that may be gene­ra­ted by a pro­duct as a who­le due to its digi­tal com­pon­ents, inclu­ding AI systems, are duly pre­ven­ted and miti­ga­ted. 
For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to operate safely and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostic systems and systems supporting human decisions should be reliable and accurate. (28a) The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, non-discrimination, the right to education, consumer protection, workers' rights, rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective remedy and to a fair trial, the right of defence and the presumption of innocence, and the right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children's vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons. (29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council, Regulation (EU) No 167/2013 of the European Parliament and of the Council, Regulation (EU) No 168/2013 of the European Parliament and of the Council, Directive 2014/90/EU of the European Parliament and of the Council, Directive (EU) 2016/797 of the European Parliament and of the Council, Regulation (EU) 2018/858 of the European Parliament and of the Council, Regulation (EU) 2018/1139 of the European Parliament and of the Council, and Regulation (EU) 2019/2144 of the European Parliament and of the Council, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts.
(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation listed in Annex II, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices. (31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered 'high-risk' under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council and Regulation (EU) 2017/746 of the European Parliament and of the Council, where a third-party conformity assessment is provided for medium-risk and high-risk products. (32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems that the Commission should be empowered to adopt, via delegated acts, to take into account the rapid pace of technological development, as well as the potential changes in the use of AI systems. (32a) It is also important to clarify that there may be specific cases in which AI systems referred to in the pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas, because they do not materially influence the decision-making or do not harm those interests substantially. For the purpose of this Regulation an AI system not materially influencing the outcome of decision-making should be understood as an AI system that does not impact the substance, and thereby the outcome, of decision-making, whether human or automated. This could be the case if one or more of the following conditions are fulfilled.
The first cri­ter­ion should be that the AI system is inten­ded to per­form a nar­row pro­ce­du­ral task, such as an AI system that trans­forms uns­truc­tu­red data into struc­tu­red data, an AI system that clas­si­fi­es inco­ming docu­ments into cate­go­ries or an AI system that is used to detect dupli­ca­tes among a lar­ge num­ber of appli­ca­ti­ons. The­se tasks are of such nar­row and limi­t­ed natu­re that they pose only limi­t­ed risks which are not increa­sed through the use in a con­text listed in Annex III. The second cri­ter­ion should be that the task per­for­med by the AI system is inten­ded to impro­ve the result of a pre­vious­ly com­ple­ted human acti­vi­ty that may be rele­vant for the pur­po­se of the use case listed in Annex III. Con­side­ring the­se cha­rac­te­ri­stics, the AI system only pro­vi­des an addi­tio­nal lay­er to a human acti­vi­ty with con­se­quent­ly lowe­red risk. For exam­p­le, this cri­ter­ion would app­ly to AI systems that are inten­ded to impro­ve the lan­guage used in pre­vious­ly draf­ted docu­ments, for instance in rela­ti­on to pro­fes­sio­nal tone, aca­de­mic style of lan­guage or by alig­ning text to a cer­tain brand mes­sa­ging. The third cri­ter­ion should be that the AI system is inten­ded to detect decis­i­on-making pat­terns or devia­ti­ons from pri­or decis­i­on-making pat­terns. The risk would be lowe­red becau­se the use of the AI system fol­lows a pre­vious­ly com­ple­ted human assess­ment which it is not meant to replace or influence, wit­hout pro­per human review. Such AI systems include for instance tho­se that, given a cer­tain gra­ding pat­tern of a tea­cher, can be used to check ex post whe­ther the tea­cher may have devia­ted from the gra­ding pat­tern so as to flag poten­ti­al incon­si­sten­ci­es or anoma­lies. The fourth cri­ter­ion should be that the AI system is inten­ded to per­form a task that is only pre­pa­ra­to­ry to an assess­ment rele­vant for the pur­po­se of the use case listed in Annex III, thus making the pos­si­ble impact of the out­put of the system very low in terms of repre­sen­ting a risk for the assess­ment to fol­low. For exam­p­le, this cri­ter­ion covers smart solu­ti­ons for file hand­ling, which include various func­tions from index­ing, sear­ching, text and speech pro­ce­s­sing or lin­king data to other data sources, or AI systems used for trans­la­ti­on of initi­al docu­ments. In any case, AI systems refer­red to in Annex III should be con­side­red to pose signi­fi­cant risks of harm to the health, safe­ty or fun­da­men­tal rights of natu­ral per­sons if the AI system implies pro­fil­ing within the mea­ning of Artic­le 4(4) of Regu­la­ti­on (EU) 2016/679 and Artic­le 3(4) of Direc­ti­ve (EU) 2016/680 and Artic­le 3(5) of Regu­la­ti­on 2018/1725. To ensu­re tracea­bi­li­ty and trans­pa­ren­cy, a pro­vi­der who con­siders that an AI system refer­red to in Annex III is not high-risk on the basis of the afo­re­men­tio­ned cri­te­ria should draw up docu­men­ta­ti­on of the assess­ment befo­re that system is pla­ced on the mar­ket or put into ser­vice and should pro­vi­de this docu­men­ta­ti­on to natio­nal com­pe­tent aut­ho­ri­ties upon request. Such pro­vi­der should be obli­ged to regi­ster the system in the EU data­ba­se estab­lished under this Regu­la­ti­on. 
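Purely as an illustration, and not as part of the Regulation's text, the derogation logic described in recital 32a can be summarised in the following Python sketch. The names (AnnexIIIUseCase, may_be_treated_as_non_high_risk) and the boolean modelling of the four criteria are assumptions made for this example; the legal assessment itself remains a documented, case-by-case evaluation by the provider.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIIUseCase:
    """Illustrative description of an AI system used in an Annex III area."""
    narrow_procedural_task: bool         # first criterion
    improves_prior_human_activity: bool  # second criterion
    detects_decision_patterns: bool      # third criterion (no replacement of the human assessment)
    preparatory_task_only: bool          # fourth criterion
    involves_profiling: bool             # profiling within the meaning of Art. 4(4) of Regulation (EU) 2016/679

def may_be_treated_as_non_high_risk(case: AnnexIIIUseCase) -> bool:
    """True only if at least one derogation criterion applies and the system does not
    involve profiling; a provider relying on this would still have to document the
    assessment and register the system in the EU database."""
    if case.involves_profiling:
        return False  # profiling always keeps the system high-risk
    return any([
        case.narrow_procedural_task,
        case.improves_prior_human_activity,
        case.detects_decision_patterns,
        case.preparatory_task_only,
    ])

# Example: a document-classification tool used in an Annex III context
print(may_be_treated_as_non_high_risk(AnnexIIIUseCase(True, False, False, False, False)))  # True
```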
With a view to pro­vi­de fur­ther gui­dance for the prac­ti­cal imple­men­ta­ti­on of the cri­te­ria under which AI systems refer­red to in Annex III are excep­tio­nal­ly not high-risk, the Com­mis­si­on should, after con­sul­ting the AI Board, pro­vi­de gui­de­lines spe­ci­fy­ing this prac­ti­cal imple­men­ta­ti­on com­ple­ted by a com­pre­hen­si­ve list of prac­ti­cal examp­les of high risk and non-high risk use cases of AI systems. (33a) As bio­me­tric data con­sti­tu­tes a spe­cial cate­go­ry of sen­si­ti­ve per­so­nal data, it is appro­pria­te to clas­si­fy as high-risk seve­ral cri­ti­cal use-cases of bio­me­tric systems, inso­far as their use is per­mit­ted under rele­vant Uni­on and natio­nal law. Tech­ni­cal inac­cu­ra­ci­es of AI systems inten­ded for the remo­te bio­me­tric iden­ti­fi­ca­ti­on of natu­ral per­sons can lead to bia­sed results and ent­ail dis­cri­mi­na­to­ry effects. This is par­ti­cu­lar­ly rele­vant when it comes to age, eth­ni­ci­ty, race, sex or disa­bi­li­ties. The­r­e­fo­re, remo­te bio­me­tric iden­ti­fi­ca­ti­on systems should be clas­si­fi­ed as high-risk in view of the risks that they pose. This exclu­des AI systems inten­ded to be used for bio­me­tric veri­fi­ca­ti­on, which inclu­des authen­ti­ca­ti­on, who­se sole pur­po­se is to con­firm that a spe­ci­fic natu­ral per­son is the per­son he or she claims to be and to con­firm the iden­ti­ty of a natu­ral per­son for the sole pur­po­se of having access to a ser­vice, unlocking a device or having secu­re access to pre­mi­ses. In addi­ti­on, AI systems inten­ded to be used for bio­me­tric cate­go­ri­sa­ti­on accor­ding to sen­si­ti­ve attri­bu­tes or cha­rac­te­ri­stics pro­tec­ted under Artic­le 9(1) of Regu­la­ti­on (EU) 2016/679 based on bio­me­tric data, in so far as the­se are not pro­hi­bi­ted under this Regu­la­ti­on, and emo­ti­on reco­gni­ti­on systems that are not pro­hi­bi­ted under this Regu­la­ti­on, should be clas­si­fi­ed as high-risk. Bio­me­tric systems which are inten­ded to be used sole­ly for the pur­po­se of enab­ling cyber­se­cu­ri­ty and per­so­nal data pro­tec­tion mea­su­res should not be con­side­red as high-risk systems. (34) As regards the manage­ment and ope­ra­ti­on of cri­ti­cal infras­truc­tu­re, it is appro­pria­te to clas­si­fy as high-risk the AI systems inten­ded to be used as safe­ty com­pon­ents in the manage­ment and ope­ra­ti­on of cri­ti­cal digi­tal infras­truc­tu­re as listed in Annex I point 8 of the Direc­ti­ve on the resi­li­ence of cri­ti­cal enti­ties, road traf­fic and the sup­p­ly of water, gas, hea­ting and elec­tri­ci­ty, sin­ce their fail­ure or mal­func­tio­ning may put at risk the life and health of per­sons at lar­ge sca­le and lead to app­re­cia­ble dis­rup­ti­ons in the ordi­na­ry con­duct of social and eco­no­mic acti­vi­ties. Safe­ty com­pon­ents of cri­ti­cal infras­truc­tu­re, inclu­ding cri­ti­cal digi­tal infras­truc­tu­re, are systems used to direct­ly pro­tect the phy­si­cal inte­gri­ty of cri­ti­cal infras­truc­tu­re or health and safe­ty of per­sons and pro­per­ty but which are not neces­sa­ry in order for the system to func­tion. Fail­ure or mal­func­tio­ning of such com­pon­ents might direct­ly lead to risks to the phy­si­cal inte­gri­ty of cri­ti­cal infras­truc­tu­re and thus to risks to health and safe­ty of per­sons and pro­per­ty. Com­pon­ents inten­ded to be used sole­ly for cyber­se­cu­ri­ty pur­po­ses should not qua­li­fy as safe­ty com­pon­ents. 
Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres. (35) Deployment of AI systems in education is important to promote high-quality digital education and training and to allow all learners and teachers to acquire and share the necessary digital skills and competences, including media literacy and critical thinking, to take an active part in the economy, society, and in democratic processes. However, AI systems used in education or vocational training, notably for determining access or admission, for assigning persons to educational and vocational training institutions or programmes at all levels, for evaluating learning outcomes of persons, for assessing the appropriate level of education for an individual and materially influencing the level of education and training that individuals will receive or be able to access, or for monitoring and detecting prohibited behaviour of students during tests, should be classified as high-risk AI systems, since they may determine the educational and professional course of a person's life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. (36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks based on individual behaviour, personal traits or characteristics, and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of these persons and workers' rights. Relevant work-related contractual relationships should meaningfully involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine their fundamental rights to data protection and privacy. (37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one's standard of living.
In par­ti­cu­lar, natu­ral per­sons app­ly­ing for or recei­ving essen­ti­al public assi­stance bene­fits and ser­vices from public aut­ho­ri­ties name­ly heal­th­ca­re ser­vices, social secu­ri­ty bene­fits, social ser­vices pro­vi­ding pro­tec­tion in cases such as mater­ni­ty, ill­ness, indu­stri­al acci­dents, depen­den­cy or old age and loss of employment and social and housing assi­stance, are typi­cal­ly depen­dent on tho­se bene­fits and ser­vices and in a vul­nerable posi­ti­on in rela­ti­on to the respon­si­ble aut­ho­ri­ties. If AI systems are used for deter­mi­ning whe­ther such bene­fits and ser­vices should be gran­ted, denied, redu­ced, revo­ked or reclai­med by aut­ho­ri­ties, inclu­ding whe­ther bene­fi­ci­a­ries are legi­ti­m­ate­ly entit­led to such bene­fits or ser­vices, tho­se systems may have a signi­fi­cant impact on per­sons’ liveli­hood and may inf­rin­ge their fun­da­men­tal rights, such as the right to social pro­tec­tion, non-dis­cri­mi­na­ti­on, human dignity or an effec­ti­ve reme­dy and should the­r­e­fo­re be clas­si­fi­ed as high-risk. None­thel­ess, this Regu­la­ti­on should not ham­per the deve­lo­p­ment and use of inno­va­ti­ve approa­ches in the public admi­ni­stra­ti­on, which would stand to bene­fit from a wider use of com­pli­ant and safe AI systems, pro­vi­ded that tho­se systems do not ent­ail a high risk to legal and natu­ral per­sons. In addi­ti­on, AI systems used to eva­lua­te the cre­dit score or cre­dit­wort­hi­ness of natu­ral per­sons should be clas­si­fi­ed as high-risk AI systems, sin­ce they deter­mi­ne tho­se per­sons’ access to finan­cial resour­ces or essen­ti­al ser­vices such as housing, elec­tri­ci­ty, and tele­com­mu­ni­ca­ti­on ser­vices. AI systems used for this pur­po­se may lead to dis­cri­mi­na­ti­on of per­sons or groups and per­pe­tua­te histo­ri­cal pat­terns of dis­cri­mi­na­ti­on, for exam­p­le based on racial or eth­nic ori­g­ins, gen­der, disa­bi­li­ties, age, sexu­al ori­en­ta­ti­on, or crea­te new forms of dis­cri­mi­na­to­ry impacts. Howe­ver, AI systems pro­vi­ded for by Uni­on law for the pur­po­se of detec­ting fraud in the offe­ring of finan­cial ser­vices and for pru­den­ti­al pur­po­ses to cal­cu­la­te cre­dit insti­tu­ti­ons’ and insu­ran­ces under­ta­kings’ capi­tal requi­re­ments should not be con­side­red as high-risk under this Regu­la­ti­on. Moreo­ver, AI systems inten­ded to be used for risk assess­ment and pri­cing in rela­ti­on to natu­ral per­sons for health and life insu­rance can also have a signi­fi­cant impact on per­sons’ liveli­hood and if not duly desi­gned, deve­lo­ped and used, can inf­rin­ge their fun­da­men­tal rights and can lead to serious con­se­quen­ces for people’s life and health, inclu­ding finan­cial exclu­si­on and dis­cri­mi­na­ti­on. Final­ly, AI systems used to eva­lua­te and clas­si­fy emer­gen­cy calls by natu­ral per­sons or to dis­patch or estab­lish prio­ri­ty in the dis­patching of emer­gen­cy first respon­se ser­vices, inclu­ding by poli­ce, fire­figh­ters and medi­cal aid, as well as of emer­gen­cy heal­th­ca­re pati­ent tria­ge systems, should also be clas­si­fi­ed as high-risk sin­ce they make decis­i­ons in very cri­ti­cal situa­tions for the life and health of per­sons and their pro­per­ty. 
(38) Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person's liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities for assessing the risk of a natural person becoming a victim of criminal offences, as polygraphs and similar tools, for the evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences, and, insofar as not prohibited under this Regulation, for assessing the risk of a natural person offending or reoffending not solely based on the profiling of natural persons nor based on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, and for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering legislation should not be classified as high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement authorities should not become a factor of inequality or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, notably the difficulty in obtaining meaningful information on the functioning of these systems and the consequent difficulty in challenging their results in court, in particular by individuals under investigation.
(39) AI systems used in migra­ti­on, asyl­um and bor­der con­trol manage­ment affect peo­p­le who are often in par­ti­cu­lar­ly vul­nerable posi­ti­on and who are depen­dent on the out­co­me of the actions of the com­pe­tent public aut­ho­ri­ties. The accu­ra­cy, non-dis­cri­mi­na­to­ry natu­re and trans­pa­ren­cy of the AI systems used in tho­se con­texts are the­r­e­fo­re par­ti­cu­lar­ly important to gua­ran­tee the respect of the fun­da­men­tal rights of the affec­ted per­sons, nota­b­ly their rights to free move­ment, non-dis­cri­mi­na­ti­on, pro­tec­tion of pri­va­te life and per­so­nal data, inter­na­tio­nal pro­tec­tion and good admi­ni­stra­ti­on. It is the­r­e­fo­re appro­pria­te to clas­si­fy as high-risk, inso­far as their use is per­mit­ted under rele­vant Uni­on and natio­nal law AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties or by Uni­on agen­ci­es, offices or bodies char­ged with tasks in the fields of migra­ti­on, asyl­um and bor­der con­trol manage­ment as poly­graphs and simi­lar tools, for asses­sing cer­tain risks posed by natu­ral per­sons ente­ring the ter­ri­to­ry of a Mem­ber Sta­te or app­ly­ing for visa or asyl­um, for assi­sting com­pe­tent public aut­ho­ri­ties for the exami­na­ti­on, inclu­ding rela­ted assess­ment of the relia­bi­li­ty of evi­dence, of appli­ca­ti­ons for asyl­um, visa and resi­dence per­mits and asso­cia­ted com­plaints with regard to the objec­ti­ve to estab­lish the eli­gi­bi­li­ty of the natu­ral per­sons app­ly­ing for a sta­tus, for the pur­po­se of detec­ting, reco­g­nis­ing or iden­ti­fy­ing natu­ral per­sons in the con­text of migra­ti­on, asyl­um and bor­der con­trol manage­ment with the excep­ti­on of tra­vel docu­ments. AI systems in the area of migra­ti­on, asyl­um and bor­der con­trol manage­ment cover­ed by this Regu­la­ti­on should com­ply with the rele­vant pro­ce­du­ral requi­re­ments set by the Direc­ti­ve 2013/32/EU of the Euro­pean Par­lia­ment and of the Council20, the Regu­la­ti­on (EC) No 810/2009 of the Euro­pean Par­lia­ment and of the Council21 and other rele­vant legis­la­ti­on. The use of AI systems in migra­ti­on, asyl­um and bor­der con­trol manage­ment should in no cir­cum­stances be used by Mem­ber Sta­tes or Uni­on insti­tu­ti­ons, agen­ci­es or bodies as a means to cir­cum­vent their inter­na­tio­nal obli­ga­ti­ons under the Con­ven­ti­on of 28 July 1951 rela­ting to the Sta­tus of Refu­gees as amen­ded by the Pro­to­col of 31 Janu­ary 1967, nor should they be used to in any way inf­rin­ge on the prin­ci­ple of non-refou­le­ment, or deny safe and effec­ti­ve legal ave­nues into the ter­ri­to­ry of the Uni­on, inclu­ding the right to inter­na­tio­nal pro­tec­tion. (40) Cer­tain AI systems inten­ded for the admi­ni­stra­ti­on of justi­ce and demo­cra­tic pro­ce­s­ses should be clas­si­fi­ed as high-risk, con­side­ring their poten­ti­al­ly signi­fi­cant impact on demo­cra­cy, rule of law, indi­vi­du­al free­doms as well as the right to an effec­ti­ve reme­dy and to a fair tri­al. In par­ti­cu­lar, to address the risks of poten­ti­al bia­ses, errors and opa­ci­ty, it is appro­pria­te to qua­li­fy as high-risk AI systems inten­ded to be used by a judi­cial aut­ho­ri­ty or on its behalf to assist judi­cial aut­ho­ri­ties in rese­ar­ching and inter­pre­ting facts and the law and in app­ly­ing the law to a con­cre­te set of facts. 
AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be considered high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties. The use of artificial intelligence tools can support the decision-making power of judges or judicial independence, but should not replace it, as the final decision-making must remain a human-driven activity and decision. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, or administrative tasks. (40a) Without prejudice to the rules provided for in [Regulation xxx on the transparency and targeting of political advertising], and in order to address the risks of undue external interference with the right to vote enshrined in Article 39 of the Charter, and of adverse effects on democracy and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems, with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view. (41) The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant, unless it is specifically provided for otherwise in this Regulation. (42) To mitigate the risks from high-risk AI systems placed on the market or put into service and to ensure a high level of trustworthiness, certain mandatory requirements should apply to high-risk AI systems, taking into account the intended purpose and the context of use of the AI system and according to the risk management system to be established by the provider. The measures adopted by the providers to comply with the mandatory requirements of this Regulation should take into account the generally acknowledged state of the art on artificial intelligence, and be proportionate and effective to meet the objectives of this Regulation.
Following the New Legislative Framework approach, as clarified in the Commission notice 'Blue Guide' on the implementation of EU product rules 2022 (C/2022/3637), the general rule is that several pieces of EU legislation may have to be taken into consideration for one product, since the making available or putting into service can only take place when the product complies with all applicable Union harmonisation legislation. Hazards of AI systems covered by the requirements of this Regulation concern different aspects than the existing Union harmonisation acts, and therefore the requirements of this Regulation would complement the existing body of the Union harmonisation acts. For example, machinery or medical device products incorporating an AI system might present risks not addressed by the essential health and safety requirements set out in the relevant Union harmonised legislation, as this sectoral legislation does not deal with risks specific to AI systems. This calls for a simultaneous and complementary application of the various legislative acts. To ensure consistency and avoid unnecessary administrative burden or costs, providers of a product that contains one or more high-risk artificial intelligence systems, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Annex II, Section A apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more artificial intelligence systems with all applicable requirements of the Union harmonised legislation in the best way. This flexibility could mean, for example, a decision by the provider to integrate a part of the necessary testing and reporting processes, information and documentation required under this Regulation into already existing documentation and procedures required under the existing Union harmonisation legislation listed in Annex II, Section A. This however should not in any way undermine the obligation of the provider to comply with all the applicable requirements. (42a) The risk management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. This process should be aimed at identifying and mitigating the relevant risks of artificial intelligence systems on health, safety and fundamental rights. The risk management system should be regularly reviewed and updated to ensure its continuing effectiveness, as well as the justification and documentation of any significant decisions and actions taken subject to this Regulation. This process should ensure that the provider identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable risks of artificial intelligence systems to health, safety and fundamental rights in light of their intended purpose and reasonably foreseeable misuse, including the possible risks arising from the interaction between the AI system and the environment within which it operates.
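As a rough, non-authoritative illustration of the continuous, iterative risk management process described in recital 42a, the following Python sketch models a minimal risk register. The Risk and RiskRegister names, the 1-5 severity and likelihood scales and the review rule are invented for this example and are not prescribed by the Regulation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str      # e.g. a possible harm to health, safety or fundamental rights
    source: str           # "intended purpose" or "reasonably foreseeable misuse"
    severity: int         # 1 (low) .. 5 (critical) - illustrative scale
    likelihood: int       # 1 (rare) .. 5 (frequent) - illustrative scale
    mitigation: str = ""  # measure adopted by the provider, if any
    documented_on: date = field(default_factory=date.today)

class RiskRegister:
    """Minimal iterative register: identify, mitigate, review, document."""
    def __init__(self) -> None:
        self.entries: list[Risk] = []

    def identify(self, risk: Risk) -> None:
        self.entries.append(risk)

    def review(self) -> list[Risk]:
        # Flag entries that still lack a documented mitigation measure.
        return [r for r in self.entries if not r.mitigation]

register = RiskRegister()
register.identify(Risk("biased output in edge cases", "reasonably foreseeable misuse", 4, 2))
print(len(register.review()))  # 1 - one risk still awaiting a mitigation measure
```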
The risk manage­ment system should adopt the most appro­pria­te risk manage­ment mea­su­res in the light of the sta­te of the art in AI. When iden­ti­fy­ing the most appro­pria­te risk manage­ment mea­su­res, the pro­vi­der should docu­ment and explain the choices made and, when rele­vant, invol­ve experts and exter­nal stake­hol­ders. In iden­ti­fy­ing rea­son­ab­ly fore­seeable misu­se of high-risk AI systems the pro­vi­der should cover uses of the AI systems which, while not direct­ly cover­ed by the inten­ded pur­po­se and pro­vi­ded for in the ins­truc­tion for use may nevert­hel­ess be rea­son­ab­ly expec­ted to result from rea­di­ly pre­dic­ta­ble human beha­viour in the con­text of the spe­ci­fic cha­rac­te­ri­stics and use of the par­ti­cu­lar AI system. Any known or fore­seeable cir­cum­stances, rela­ted to the use of the high-risk AI system in accordance with its inten­ded pur­po­se or under con­di­ti­ons of rea­son­ab­ly fore­seeable misu­se, which may lead to risks to the health and safe­ty or fun­da­men­tal rights should be inclu­ded in the ins­truc­tions for use pro­vi­ded by the pro­vi­der. This is to ensu­re that the deployer is awa­re and takes them into account when using the high-risk AI system. Iden­ti­fy­ing and imple­men­ting risk miti­ga­ti­on mea­su­res for fore­seeable misu­se under this Regu­la­ti­on should not requi­re spe­ci­fic addi­tio­nal trai­ning mea­su­res for the high-risk AI system by the pro­vi­der to address them. The pro­vi­ders howe­ver are encou­ra­ged to con­sider such addi­tio­nal trai­ning mea­su­res to miti­ga­te rea­sonable fore­seeable misu­s­es as neces­sa­ry and appro­pria­te. (43) Requi­re­ments should app­ly to high-risk AI systems as regards risk manage­ment, the qua­li­ty and rele­van­ce of data sets used, tech­ni­cal docu­men­ta­ti­on and record-kee­ping, trans­pa­ren­cy and the pro­vi­si­on of infor­ma­ti­on to deployers, human over­sight, and robust­ness, accu­ra­cy and cyber­se­cu­ri­ty. Tho­se requi­re­ments are neces­sa­ry to effec­tively miti­ga­te the risks for health, safe­ty and fun­da­men­tal rights, and no other less trade rest­ric­ti­ve mea­su­res are rea­son­ab­ly available, thus avo­i­ding unju­sti­fi­ed rest­ric­tions to trade. (44) High qua­li­ty data and access to high qua­li­ty data plays a vital role in pro­vi­ding struc­tu­re and in ensu­ring the per­for­mance of many AI systems, espe­ci­al­ly when tech­ni­ques invol­ving the trai­ning of models are used, with a view to ensu­re that the high-risk AI system per­forms as inten­ded and safe­ly and it does not beco­me a source of dis­cri­mi­na­ti­on pro­hi­bi­ted by Uni­on law. High qua­li­ty data­sets for trai­ning, vali­da­ti­on and test­ing requi­re the imple­men­ta­ti­on of appro­pria­te data gover­nan­ce and manage­ment prac­ti­ces. Data­sets for trai­ning, vali­da­ti­on and test­ing, inclu­ding the labels, should be rele­vant, suf­fi­ci­ent­ly repre­sen­ta­ti­ve, and to the best ext­ent pos­si­ble free of errors and com­ple­te in view of the inten­ded pur­po­se of the system. 
In order to facilitate compliance with EU data protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in the case of personal data, transparency about the original purpose of the data collection. The datasets should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the datasets that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations ('feedback loops'). Biases can for example be inherent in underlying datasets, especially when historical data is being used, or generated when the systems are implemented in real world settings. Results provided by AI systems could be influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable groups, including racial or ethnic groups. The requirement for the datasets to be to the best extent possible complete and free of errors should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. In particular, datasets should take into account, to the extent required by their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the AI system is intended to be used. The requirements related to data governance can be complied with by having recourse to third parties that offer certified compliance services, including verification of data governance, data set integrity, and data training, validation and testing practices, as far as compliance with the data requirements of this Regulation is ensured. (45) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing and experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance.
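The expectations in recital 44 on statistical properties and bias mitigation of datasets could, under assumptions chosen freely here, be supported by simple checks such as the following Python sketch. The record structure, the group_key field and the 5 % minimum-share threshold are illustrative assumptions, not values set by the Regulation.

```python
from collections import Counter

def group_shares(records: list[dict], group_key: str) -> dict[str, float]:
    """Share of each group in the dataset (illustrative representativeness check)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(records: list[dict], group_key: str, min_share: float = 0.05) -> list[str]:
    """Groups falling below an (arbitrary) minimum share, as a prompt for human review."""
    return [g for g, share in group_shares(records, group_key).items() if share < min_share]

data = [{"group": "A"}, {"group": "A"}, {"group": "B"}]
print(group_shares(data, "group"))          # roughly {'A': 0.67, 'B': 0.33}
print(flag_underrepresented(data, "group")) # [] with the 5 % default threshold
```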
Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems. (45a) The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are applicable when personal data are processed. Measures taken by providers to ensure compliance with those principles may include not only anonymisation and encryption, but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves, without prejudice to the requirements on data governance provided for in this Regulation. (44c) In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally, to the extent that it is strictly necessary for the purposes of ensuring bias detection and correction in relation to the high-risk AI systems, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons and following the application of all applicable conditions laid down under this Regulation in addition to the conditions laid down in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, be able to process also special categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2)(g) of Regulation (EU) 2016/679 and Article 10(2)(g) of Regulation (EU) 2018/1725. (46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements under this Regulation, as well as monitoring of their operations and post market monitoring. This requires keeping records and the availability of technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements and facilitate post market monitoring. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system, and should be drawn up in a clear and comprehensive form. The technical documentation should be kept up to date, as appropriate, throughout the lifetime of the AI system. Furthermore, high-risk AI systems should technically allow for the automatic recording of events (logs) over the duration of the lifetime of the system. (47) To address concerns related to opacity and complexity of certain AI systems and help deployers to fulfil their obligations under this Regulation, transparency should be required for high-risk AI systems before they are placed on the market or put into service.
High- risk AI systems should be desi­gned in a man­ner to enable deployers to under­stand how the AI system works, eva­lua­te its func­tion­a­li­ty, and com­pre­hend its strengths and limi­ta­ti­ons. High-risk AI systems should be accom­pa­nied by appro­pria­te infor­ma­ti­on in the form of ins­truc­tions of use. Such infor­ma­ti­on should include the cha­rac­te­ri­stics, capa­bi­li­ties and limi­ta­ti­ons of per­for­mance of the AI system. The­se would cover infor­ma­ti­on on pos­si­ble known and fore­seeable cir­cum­stances rela­ted to the use of the high-risk AI system, inclu­ding deployer action that may influence system beha­viour and per­for­mance, under which the AI system can lead to risks to health, safe­ty, and fun­da­men­tal rights, on the chan­ges that have been pre-deter­mi­ned and asses­sed for con­for­mi­ty by the pro­vi­der and on the rele­vant human over­sight mea­su­res, inclu­ding the mea­su­res to faci­li­ta­te the inter­pre­ta­ti­on of the out­puts of the AI system by the deployers. Trans­pa­ren­cy, inclu­ding the accom­pany­ing ins­truc­tions for use, should assist deployers in the use of the system and sup­port infor­med decis­i­on making by them. Among others, deployers should be in a bet­ter posi­ti­on to make the cor­rect choice of the system they intend to use in the light of the obli­ga­ti­ons appli­ca­ble to them, be edu­ca­ted about the inten­ded and pre­clu­ded uses, and use the AI system cor­rect­ly and as appro­pria­te. In order to enhan­ce legi­bi­li­ty and acce­s­si­bi­li­ty of the infor­ma­ti­on inclu­ded in the ins­truc­tions of use, whe­re appro­pria­te, illu­stra­ti­ve examp­les, for instance on the limi­ta­ti­ons and on the inten­ded and pre­clu­ded uses of the AI system, should be inclu­ded. Pro­vi­ders should ensu­re that all docu­men­ta­ti­on, inclu­ding the ins­truc­tions for use, con­ta­ins meaningful, com­pre­hen­si­ve, acce­s­si­ble and under­stan­da­ble infor­ma­ti­on, taking into account the needs and fore­seeable know­ledge of the tar­get deployers. Ins­truc­tions for use should be made available in a lan­guage which can be easi­ly under­s­tood by tar­get deployers, as deter­mi­ned by the Mem­ber Sta­te con­cer­ned. (48) High-risk AI systems should be desi­gned and deve­lo­ped in such a way that natu­ral per­sons can over­see their func­tio­ning, ensu­re that they are used as inten­ded and that their impacts are addres­sed over the system’s life­cy­cle. For this pur­po­se, appro­pria­te human over­sight mea­su­res should be iden­ti­fi­ed by the pro­vi­der of the system befo­re its pla­cing on the mar­ket or put­ting into ser­vice. In par­ti­cu­lar, whe­re appro­pria­te, such mea­su­res should gua­ran­tee that the system is sub­ject to in-built ope­ra­tio­nal cons­traints that can­not be over­ridden by the system its­elf and is respon­si­ve to the human ope­ra­tor, and that the natu­ral per­sons to whom human over­sight has been assi­gned have the neces­sa­ry com­pe­tence, trai­ning and aut­ho­ri­ty to car­ry out that role. It is also essen­ti­al, as appro­pria­te, to ensu­re that high-risk AI systems include mecha­nisms to gui­de and inform a natu­ral per­son to whom human over­sight has been assi­gned to make infor­med decis­i­ons if, when and how to inter­ve­ne in order to avo­id nega­ti­ve con­se­quen­ces or risks, or stop the system if it does not per­form as inten­ded. 
Considering the significant consequences for persons in case of incorrect matches by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system. Given the specificities of the areas of law enforcement, migration, border control and asylum, this requirement should not apply in cases where Union or national law considers the application of this requirement to be disproportionate. (49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in the light of their intended purpose and in accordance with the generally acknowledged state of the art. The Commission and relevant organisations and stakeholders are encouraged to take due consideration of the mitigation of risks and negative impacts of the AI system. The expected level of performance metrics should be declared in the accompanying instructions of use. Providers are urged to communicate this information to deployers in a clear and easily understandable way, free of misunderstandings or misleading statements. The EU legislation on legal metrology, including the Measuring Instruments Directive (MID) and the Non-automatic Weighing Instruments (NAWI) Directive, aims to ensure the accuracy of measurements and to help the transparency and fairness of commercial transactions. In this context, in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking authorities, the Commission should encourage, as appropriate, the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission should take note of and collaborate with international partners working on metrology and relevant measurement indicators relating to Artificial Intelligence. (50) Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or otherwise undesirable behaviour that may result from limitations within the systems or the environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). Therefore, technical and organisational measures should be taken to ensure the robustness of high-risk AI systems, for example by designing and developing appropriate technical solutions to prevent or minimize harmful or otherwise undesirable behaviour. Those technical solutions may include for instance mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries.
Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. (51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure. (51a) Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI systems which fall within the scope of Regulation 2022/0272 may, in accordance with Article 8 of that Regulation, demonstrate compliance with the cybersecurity requirement of this Regulation by fulfilling the essential cybersecurity requirements set out in Article 10 and Annex I of Regulation 2022/0272. When high-risk AI systems fulfil the essential requirements of Regulation 2022/0272, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof issued under Regulation 2022/0272. For this purpose, the assessment of the cybersecurity risks associated with a product with digital elements classified as a high-risk AI system according to this Regulation, carried out under Regulation 2022/0272, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation. The conformity assessment procedure provided by this Regulation should apply in relation to the essential cybersecurity requirements of a product with digital elements covered by Regulation 2022/0272 and classified as a high-risk AI system under this Regulation. However, this rule should not result in reducing the necessary level of assurance for critical products with digital elements covered by Regulation 2022/0272.
Therefore, by way of derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as important and critical products with digital elements pursuant to Regulation 2022/0272 and to which the conformity assessment procedure based on internal control referred to in Annex VI of this Regulation applies, are subject to the conformity assessment provisions of Regulation 2022/0272 insofar as the essential cybersecurity requirements of Regulation 2022/0272 are concerned. In this case, for all the other aspects covered by this Regulation the respective provisions on conformity assessment based on internal control set out in Annex VI of this Regulation should apply. Building on the knowledge and expertise of ENISA on cybersecurity policy and the tasks assigned to ENISA under Regulation (EU) 2019/881, the European Commission should cooperate with ENISA on issues related to the cybersecurity of AI systems. (52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council on market surveillance and compliance of products ('New Legislative Framework for the marketing of products'). (53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system. (53a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the Union and the Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full and equal access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system.
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Providers of high-risk AI systems that are subject to obligations regarding quality management systems under relevant sectorial Union law should have the possibility to include the elements of the quality management system provided for in this Regulation as part of the existing quality management system provided for in that other sectorial Union legislation. The complementarity between this Regulation and existing sectorial Union law should also be taken into account in future standardisation activities or guidance adopted by the Commission. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question. (56) To enable enforcement of this Regulation and create a level playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union. This authorised representative plays a pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or put into service in the Union by those providers who are not established in the Union and in serving as their contact person established in the Union. (56a) In the light of the nature and complexity of the value chain for AI systems and in line with the New Legislative Framework principles, it is essential to ensure legal certainty and facilitate compliance with this Regulation. Therefore, it is necessary to clarify the role and the specific obligations of relevant operators along the value chain, such as importers and distributors who may contribute to the development of AI systems. In certain situations those operators could act in more than one role at the same time and should therefore fulfil cumulatively all relevant obligations associated with those roles. For example, an operator could act as a distributor and an importer at the same time. (57) To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any distributor, importer, deployer or other third party should be considered a provider of a high-risk AI system and therefore assume all the relevant obligations.
This would be the case if that party puts its name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise, or if that party makes a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service and in a way that it remains a high-risk AI system in accordance with Article 6, or if it modifies the intended purpose of an AI system, including a general purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in a way that the AI system becomes a high-risk AI system in accordance with Article 6. These provisions should apply without prejudice to more specific provisions established in certain New Legislative Framework sectorial legislation with which this Regulation should apply jointly. For example, Article 16(2) of Regulation (EU) 2017/745, establishing that certain changes should not be considered modifications of a device that could affect its compliance with the applicable requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that Regulation. (57a) General purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the respective high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation. (57b) Where, under the conditions laid down in this Regulation, the provider that initially placed the AI system on the market or put it into service should no longer be considered the provider for the purposes of this Regulation, and when that provider has not expressly excluded the change of the AI system into a high-risk AI system, the former provider should nonetheless closely cooperate and make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding compliance with the conformity assessment of high-risk AI systems. (57c) In addition, where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the product manufacturer as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation.
(57d) Within the AI value chain multiple parties often supply AI systems, tools and services but also components or processes that are incorporated by the provider into the AI system with various objectives, including the model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. These parties have an important role in the value chain towards the provider of the high-risk AI system into which their AI systems, tools, services, components or processes are integrated, and should provide by written agreement this provider with the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider to fully comply with the obligations set out in this Regulation, without compromising their own intellectual property rights or trade secrets. (57e) Third parties making accessible to the public tools, services, processes, or AI components other than general-purpose AI models, shall not be mandated to comply with requirements targeting the responsibilities along the AI value chain, in particular towards the provider that has used or integrated them, when those tools, services, processes, or AI components are made accessible under a free and open licence. Developers of free and open-source tools, services, processes, or AI components other than general-purpose AI models should be encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union. (57f) The Commission could develop and recommend voluntary model contractual terms between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used or integrated in high-risk AI systems, to facilitate the cooperation along the value chain. When developing voluntary model contractual terms, the Commission should also take into account possible contractual requirements applicable in specific sectors or business cases. (58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. These obligations should be without prejudice to other deployer obligations in relation to high-risk AI systems under Union or national law.
(58b) This Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or their representatives under Union or national law and practice, including Directive 2002/14/EC on a general framework for informing and consulting employees, on decisions to put into service or use AI systems. It remains necessary to ensure information of workers and their representatives on the planned deployment of high-risk AI systems at the workplace in cases where the conditions for those information or information and consultation obligations in other legal instruments are not fulfilled. Moreover, such an information right is ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect should be laid down in this Regulation, without affecting any existing rights of workers. (58b) Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the people or groups of people likely to be affected, including vulnerable groups. Deployers of high-risk AI systems referred to in Annex III also play a critical role in informing natural persons and should, when they make decisions or assist in making decisions related to natural persons, where applicable, inform the natural persons that they are subject to the use of the high-risk AI system. This information should include the intended purpose and the type of decisions it makes. The deployer should also inform the natural person about its right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law enforcement purposes, this obligation should be implemented in accordance with Article 13 of Directive 2016/680. (58d) Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that allows such processing only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and where authorised by Union or Member State law. Such use, when authorised, also needs to respect the principles laid down in Article 4(1) of Directive (EU) 2016/680 including lawfulness, fairness and transparency, purpose limitation, accuracy and storage limitation. (58e) Without prejudice to applicable Union law, notably the GDPR and Directive (EU) 2016/680 (the Law Enforcement Directive), considering the intrusive nature of post remote biometric identification systems, the use of post remote biometric identification systems shall be subject to safeguards.
Post biometric identification systems should always be used in a way that is proportionate, legitimate and strictly necessary, and thus targeted, in terms of the individuals to be identified, the location and temporal scope, and based on a closed dataset of legally acquired video footage. In any case, post remote biometric identification systems should not be used in the framework of law enforcement in a way that leads to indiscriminate surveillance. The conditions for post remote biometric identification should in any case not provide a basis to circumvent the conditions of the prohibition and strict exceptions for real-time remote biometric identification. (58g) In order to efficiently ensure that fundamental rights are protected, deployers of high-risk AI systems that are bodies governed by public law, or private operators providing public services, and operators deploying certain high-risk AI systems referred to in Annex III, such as banking or insurance entities, should carry out a fundamental rights impact assessment prior to putting them into use. Services important for individuals that are of a public nature may also be provided by private entities. Private operators providing such services of a public nature are linked to tasks in the public interest, such as in the areas of education, healthcare, social services, housing and the administration of justice. The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or groups of individuals likely to be affected and to identify measures to be taken in case of the materialisation of these risks. The impact assessment should apply to the first use of the high-risk AI system, and should be updated when the deployer considers that any of the relevant factors have changed. The impact assessment should identify the deployer's relevant processes in which the high-risk AI system will be used in line with its intended purpose, and should include a description of the period of time and frequency in which the system is intended to be used as well as of the specific categories of natural persons and groups who are likely to be affected in the specific context of use. The assessment should also include the identification of specific risks of harm likely to impact the fundamental rights of these persons or groups. While performing this assessment, the deployer should take into account information relevant to a proper assessment of the impact, including but not limited to the information given by the provider of the high-risk AI system in the instructions for use. In light of the risks identified, deployers should determine measures to be taken in case of the materialisation of these risks, including for example governance arrangements in that specific context of use, such as arrangements for human oversight according to the instructions of use, or complaint handling and redress procedures, as they could be instrumental in mitigating risks to fundamental rights in concrete use-cases. After performing this impact assessment, the deployer should notify the relevant market surveillance authority.
Where appropriate, to collect relevant information necessary to perform the impact assessment, deployers of high-risk AI systems, in particular when AI systems are used in the public sector, could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI system, independent experts, and civil society organisations in conducting such impact assessments and designing measures to be taken in the case of materialisation of the risks. The AI Office should develop a template for a questionnaire in order to facilitate compliance and reduce the administrative burden for deployers. (60a) The notion of general purpose AI models should be clearly defined and set apart from the notion of AI systems to enable legal certainty. The definition should be based on the key functional characteristics of a general-purpose AI model, in particular the generality and the capability to competently perform a wide range of distinct tasks. These models are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. General purpose AI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as physical copy. These models may be further modified or fine-tuned into new models. Although AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as for example a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems. This Regulation provides specific rules for general purpose AI models and for general purpose AI models that pose systemic risks, which should apply also when these models are integrated or form part of an AI system. It should be understood that the obligations for the providers of general purpose AI models should apply once the general purpose AI models are placed on the market. When the provider of a general purpose AI model integrates its own model into its own AI system that is made available on the market or put into service, that model should be considered as being placed on the market and, therefore, the obligations in this Regulation for models should continue to apply in addition to those for AI systems. The obligations foreseen for models should in any case not apply when an own model is used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected. Considering their potential significantly negative effects, general-purpose AI models with systemic risk should always be subject to the relevant obligations under this Regulation. The definition should not cover AI models used before their placing on the market for the sole purpose of research, development and prototyping activities. This is without prejudice to the obligation to comply with this Regulation when, following such activities, a model is placed on the market.
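Recital (58g) above enumerates the elements a deployer's fundamental rights impact assessment should cover: the processes in which the system is used, the period and frequency of use, the categories of persons likely to be affected, the specific risks of harm, the mitigation measures, and the subsequent notification of the market surveillance authority. The following is a minimal, hypothetical sketch of how a deployer might record those elements; the field names are illustrative assumptions and do not reproduce the questionnaire template that the AI Office is to develop.

```python
# Hypothetical structure only: recital (58g) lists the elements of a fundamental rights
# impact assessment but prescribes no data format. All names below are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class FundamentalRightsImpactAssessment:
    deployer: str
    ai_system: str
    intended_purpose: str
    processes_using_system: List[str]            # deployer processes in which the system is used
    period_and_frequency_of_use: str             # e.g. "each incoming application, year-round"
    affected_categories_of_persons: List[str]    # groups likely to be affected in the context of use
    identified_risks_of_harm: List[str]          # specific risks to fundamental rights
    mitigation_measures: List[str]               # e.g. human oversight arrangements, complaint handling
    last_reviewed: str                           # updated when relevant factors change

    def needs_update(self, relevant_factors_changed: bool) -> bool:
        """The assessment applies to the first use and is revisited when relevant factors change."""
        return relevant_factors_changed

fria = FundamentalRightsImpactAssessment(
    deployer="Example Municipality",
    ai_system="benefits-eligibility-scoring v2",
    intended_purpose="support case workers in prioritising applications",
    processes_using_system=["social benefits intake"],
    period_and_frequency_of_use="each incoming application, year-round",
    affected_categories_of_persons=["benefit applicants", "applicants with disabilities"],
    identified_risks_of_harm=["indirect discrimination in prioritisation"],
    mitigation_measures=["case-worker review of every negative score", "complaint and redress procedure"],
    last_reviewed="2024-03-01",
)
print(fria.needs_update(relevant_factors_changed=False))
```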
(60b) Whereas the generality of a model could, among other criteria, also be determined by a number of parameters, models with at least a billion parameters and trained with a large amount of data using self-supervision at scale should be considered as displaying significant generality and competently performing a wide range of distinctive tasks. (60c) Large generative AI models are a typical example of a general-purpose AI model, given that they allow for flexible generation of content (such as in the form of text, audio, images or video) that can readily accommodate a wide range of distinctive tasks. (60d) When a general-purpose AI model is integrated into or forms part of an AI system, this system should be considered a general-purpose AI system when, due to this integration, this system has the capability to serve a variety of purposes. A general-purpose AI system can be used directly, or it may be integrated into other AI systems. (60e) Providers of general purpose AI models have a particular role and responsibility in the AI value chain, as the models they provide may form the basis for a range of downstream systems, often provided by downstream providers that necessitate a good understanding of the models and their capabilities, both to enable the integration of such models into their products and to fulfil their obligations under this or other regulations. Therefore, proportionate transparency measures should be foreseen, including the drawing up and keeping up to date of documentation, and the provision of information on the general purpose AI model for its usage by the downstream providers. Technical documentation should be prepared and kept up to date by the general purpose AI model provider for the purpose of making it available, upon request, to the AI Office and the national competent authorities. The minimal set of elements contained in such documentation should be outlined, respectively, in Annex (IXb) and Annex (IXa). The Commission should be enabled to amend the Annexes by delegated acts in the light of evolving technological developments. (60i) Software and data, including models, released under a free and open-source licence that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy. General purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should be considered free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, including models, under the condition that the original provider of the model is credited and the identical or comparable terms of distribution are respected. (60i+1) Free and open-source AI components cover the software and data, including models and general purpose AI models, tools, services or processes of an AI system.
Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purpose of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between micro enterprises, should not benefit from the exceptions provided to free and open-source AI components. The fact of making AI components available through open repositories should not, in itself, constitute a monetisation. (60f) The providers of general purpose AI models that are released under a free and open-source licence, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available should be subject to exceptions as regards the transparency-related requirements imposed on general purpose AI models, unless they can be considered to present a systemic risk, in which case the circumstance that the model is transparent and accompanied by an open-source licence should not be considered a sufficient reason to exclude compliance with the obligations under this Regulation. In any case, given that the release of general purpose AI models under a free and open-source licence does not necessarily reveal substantial information on the dataset used for the training or fine-tuning of the model and on how thereby the respect of copyright law was ensured, the exception provided for general purpose AI models from compliance with the transparency-related requirements should not concern the obligation to produce a summary about the content used for model training and the obligation to put in place a policy to respect Union copyright law, in particular to identify and respect the reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790. (60i) General purpose models, in particular large generative models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos, and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorisation of the rightholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purposes of text and data mining, under certain conditions.
Under these rules, rightholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightholders if they want to carry out text and data mining over such works. (60j) Providers that place general purpose AI models on the EU market should ensure compliance with the relevant obligations in this Regulation. For this purpose, providers of general purpose AI models should put in place a policy to respect Union law on copyright and related rights, in particular to identify and respect the reservations of rights expressed by rightholders pursuant to Article 4(3) of Directive (EU) 2019/790. Any provider placing a general purpose AI model on the EU market should comply with this obligation, regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of these general purpose AI models take place. This is necessary to ensure a level playing field among providers of general purpose AI models where no provider should be able to gain a competitive advantage in the EU market by applying lower copyright standards than those provided in the Union. (60k) In order to increase transparency on the data that is used in the pre-training and training of general purpose AI models, including text and data protected by copyright law, it is adequate that providers of such models draw up and make publicly available a sufficiently detailed summary of the content used for training the general purpose model. While taking into due account the need to protect trade secrets and confidential business information, this summary should be generally comprehensive in its scope instead of technically detailed, to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used. It is appropriate for the AI Office to provide a template for the summary, which should be simple, effective, and allow the provider to provide the required summary in narrative form. (60ka) With regard to the obligations imposed on providers of general purpose AI models to put in place a policy to respect Union copyright law and make publicly available a summary of the content used for the training, the AI Office should monitor whether the provider has fulfilled those obligations without verifying or proceeding to a work-by-work assessment of the training data in terms of copyright compliance. This Regulation does not affect the enforcement of copyright rules as provided for under Union law.
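Recital (60k) calls for a "sufficiently detailed summary" of the training content that is generally comprehensive rather than technically detailed, listing the main data collections and giving a narrative explanation of other sources, with a template to be provided by the AI Office. That template does not exist in the text quoted here; the sketch below is only an illustration of such a narrative rendering, and all names and fields are hypothetical assumptions.

```python
# Illustrative sketch only: recital (60k) leaves the actual template to the AI Office.
# All field names below are hypothetical and not taken from the Regulation.
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingContentSummary:
    provider: str
    model_name: str
    main_data_collections: List[str]   # e.g. large public or licensed databases and archives
    other_sources_narrative: str       # free-text description of remaining sources
    rights_reservations_policy: str    # how Article 4(3) Directive (EU) 2019/790 opt-outs are respected

    def to_narrative(self) -> str:
        """Render the summary in the narrative form the recital envisages."""
        collections = "; ".join(self.main_data_collections)
        return (
            f"{self.provider} trained {self.model_name} primarily on the following data "
            f"collections: {collections}. {self.other_sources_narrative} "
            f"Reservations of rights are handled as follows: {self.rights_reservations_policy}"
        )

summary = TrainingContentSummary(
    provider="ExampleCo",
    model_name="example-model-1",
    main_data_collections=["Filtered public web crawl", "Licensed news archive"],
    other_sources_narrative="Smaller curated code and encyclopedia datasets were also used.",
    rights_reservations_policy="Machine-readable opt-outs are identified and excluded at collection time.",
)
print(summary.to_narrative())
```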
(60g) Compliance with the obligations foreseen for the providers of general purpose AI models should be commensurate and proportionate to the type of model provider, excluding the need for compliance for persons who develop or use models for non-professional or scientific research purposes, who should nevertheless be encouraged to voluntarily comply with these requirements. Without prejudice to Union copyright law, compliance with these obligations should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups, that should not represent an excessive cost and not discourage the use of such models. In case of a modification or fine-tuning of a model, the obligations for providers should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation. (60m) General purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content. Systemic risks should be understood to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are influenced by conditions of misuse, model reliability, model fairness and model security, the degree of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors. In particular, international approaches have so far identified the need to devote attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks from models making copies of themselves or "self-replicating" or training other models; the ways in which models can give rise to harmful bias and discrimination with risks to individuals, communities or societies; the facilitation of disinformation or harming privacy with threats to democratic values and human rights; the risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain activity or an entire community.
(60n) It is appropriate to establish a methodology for the classification of general purpose AI models as general purpose AI models with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. High-impact capabilities in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its release on the market or when users interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of compute used for the training of the general purpose AI model measured in floating point operations (FLOPs) is one of the relevant approximations for model capabilities. The amount of compute used for training cumulates the compute used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of FLOPs should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability. To inform this, the AI Office should engage with the scientific community, industry, civil society and other experts. Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be strong predictors of the generality and capabilities of general-purpose AI models and the associated systemic risk, and could take into account the way the model will be placed on the market or the number of users it may affect. To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or impact equivalent to those captured by the set threshold. This decision should be taken on the basis of an overall assessment of the criteria set out in Annex YY, such as quality or size of the training data set, number of business and end users, its input and output modalities, its degree of autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks. (60o) It is also necessary to clarify a procedure for the classification of a general purpose AI model with systemic risks.
A general purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to be a general purpose AI model with systemic risk. The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the FLOP threshold because training of general purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general purpose AI models are able to know if their model would meet the threshold before the training is completed. In the context of this notification, the provider should be able to demonstrate that, because of its specific characteristics, a general purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general purpose AI model with systemic risks. This information is valuable for the AI Office to anticipate the placing on the market of general purpose AI models with systemic risks and the providers can start to engage with the AI Office early on. This is especially important with regard to general-purpose AI models that are planned to be released as open-source, given that, after an open-source model release, the necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement. (60p) If the Commission becomes aware of the fact that a general purpose AI model meets the requirements to classify as a general purpose model with systemic risk, which previously had either not been known or of which the relevant provider has failed to notify the Commission, the Commission should be empowered to designate it so. A system of qualified alerts should ensure that the AI Office is made aware by the scientific panel of general-purpose AI models that should possibly be classified as general purpose AI models with systemic risk, in addition to the monitoring activities of the AI Office. (60q) The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the obligations provided for providers of general purpose AI models, to obligations aimed at identifying and mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether the model is provided as a standalone model or embedded in an AI system or a product. To achieve these objectives, the Regulation should require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing. In addition, providers of general-purpose AI models with systemic risks should continuously assess and mitigate systemic risks, including for example by putting in place risk management policies, such as accountability and governance processes, implementing post-market monitoring, taking appropriate measures along the entire model's lifecycle and cooperating with relevant actors across the AI value chain.
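Recitals (60n) and (60o) treat the cumulative compute used for training, measured in FLOPs and cumulated across pre-training, synthetic data generation and fine-tuning, as one approximation of high-impact capabilities, with a presumption of systemic risk once an initial threshold is met and a notification that a provider can anticipate before training ends. The sketch below illustrates such an estimate under two assumptions that are not taken from the text quoted here: the common heuristic of roughly 6 FLOPs per parameter per training token for dense models, and a placeholder threshold value, since the actual figure is set in the operative provisions rather than in these recitals.

```python
# Minimal sketch, not a prescribed methodology: recital (60n) only says that cumulative
# training compute in FLOPs is "one of the relevant approximations" and that an initial
# threshold should be set. The 6 * parameters * tokens rule and the threshold value below
# are assumptions for illustration.
from typing import Dict

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens

def cumulative_training_compute(stages: Dict[str, float]) -> float:
    """Sum compute across pre-training, synthetic data generation, fine-tuning, etc.,
    since the recital requires the amounts to be cumulated prior to deployment."""
    return sum(stages.values())

# Hypothetical model: 70e9 parameters trained on 2e12 tokens, plus fine-tuning compute.
stages = {
    "pre-training": estimate_training_flops(70e9, 2e12),
    "fine-tuning": estimate_training_flops(70e9, 5e10),
}

PRESUMPTION_THRESHOLD_FLOPS = 1e25  # placeholder value; not stated in these recitals

total = cumulative_training_compute(stages)
print(f"Cumulative training compute: {total:.2e} FLOPs")
print("Presumption of systemic risk:", total >= PRESUMPTION_THRESHOLD_FLOPS)
```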
(60r) Providers of general purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, despite efforts to identify and prevent risks related to a general purpose AI model that may present systemic risks, the development or use of the model causes a serious incident, the general purpose AI model provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider accidental model leakage, unsanctioned releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. This protection could be facilitated by securing model weights, algorithms, servers, and datasets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved. (60s) The AI Office should encourage and facilitate the drawing up, review and adaptation of Codes of Practice, taking into account international approaches. All providers of general-purpose AI models could be invited to participate. To ensure that the Codes of Practice reflect the state of the art and duly take into account a diverse set of perspectives, the AI Office should collaborate with relevant national competent authorities, and could, where appropriate, consult with civil society organisations and other relevant stakeholders and experts, including the Scientific Panel, for the drawing up of the Codes. Codes of Practice should cover obligations for providers of general-purpose AI models and of general-purpose models presenting systemic risks. In addition, as regards systemic risks, Codes of Practice should help to establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their sources. Codes of Practice should also be focused on specific risk assessment and mitigation measures. (60t) The Codes of Practice should represent a central tool for the proper compliance with the obligations foreseen under this Regulation for providers of general-purpose AI models. Providers should be able to rely on Codes of Practice to demonstrate compliance with the obligations. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the Union, or, alternatively, to provide common rules for the implementation of the relevant obligations, if, by the time the Regulation becomes applicable, a Code of Practice cannot be finalised or is not deemed adequate by the AI Office.
Once a harmonised standard is published and assessed as suitable to cover the relevant obligations by the AI Office, compliance with a European harmonised standard should grant providers the presumption of conformity. Providers of general purpose AI models should furthermore be able to demonstrate compliance using alternative adequate means, if codes of practice or harmonised standards are not available, or they choose not to rely on those. (60u) This Regulation regulates AI systems and models by imposing certain requirements and obligations for relevant market actors that are placing them on the market, putting into service or use in the Union, thereby complementing obligations for providers of intermediary services that embed such systems or models into their services regulated by Regulation (EU) 2022/2065. To the extent that such systems or models are embedded into designated very large online platforms or very large online search engines, they are subject to the risk management framework provided for in Regulation (EU) 2022/2065. Consequently, the corresponding obligations of the AI Act should be presumed to be fulfilled, unless significant systemic risks not covered by Regulation (EU) 2022/2065 emerge and are identified in such models. Within this framework, providers of very large online platforms and very large online search engines are obliged to assess potential systemic risks stemming from the design, functioning and use of their services, including how the design of algorithmic systems used in the service may contribute to such risks, as well as systemic risks stemming from potential misuses. Those providers are also obliged to take appropriate mitigating measures in observance of fundamental rights. (60aa) Considering the quick pace of innovation and the technological evolution of digital services in scope of different instruments of Union law, in particular having in mind the usage and the perception of their recipients, the AI systems subject to this Regulation may be provided as intermediary services or parts thereof within the meaning of Regulation (EU) 2022/2065, which should be interpreted in a technology-neutral manner. For example, AI systems may be used to provide online search engines, in particular, to the extent that an AI system such as an online chatbot performs searches of, in principle, all websites, then incorporates the results into its existing knowledge and uses the updated knowledge to generate a single output that combines different sources of information. (60v) Furthermore, obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065.
This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation. (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the single market. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council, which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. A balanced representation of interests involving all relevant stakeholders in the development of standards, in particular SMEs, consumer organisations and environmental and social stakeholders in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012 should therefore be encouraged. In order to facilitate compliance, the standardisation requests should be issued by the Commission without undue delay. When preparing the standardisation request, the Commission should consult the AI advisory forum and the Board in order to collect relevant expertise. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, and after consultation of the AI advisory forum, common specifications for certain requirements under this Regulation. The common specification should be an exceptional fall-back solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation request has not been accepted by any of the European standardisation organisations, or when the relevant harmonised standards insufficiently address fundamental rights concerns, or when the harmonised standards do not comply with the request, or when there are delays in the adoption of an appropriate harmonised standard. If such delay in the adoption of a harmonised standard is due to the technical complexity of the standard in question, this should be considered by the Commission before contemplating the establishment of common specifications. When developing common specifications, the Commission is encouraged to cooperate with international partners and international standardisation bodies.
(61a) It is appropriate that, without prejudice to the use of harmonised standards and common specifications, providers of a high-risk AI system that has been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which the AI system is intended to be used, should be presumed to be in compliance with the respective measure provided for under the requirement on data governance set out in this Regulation. Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, in line with Article 54(3) of Regulation (EU) 2019/881 of the European Parliament and of the Council, high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the references of which have been published in the Official Journal of the European Union should be presumed to be in compliance with the cybersecurity requirement of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover the cybersecurity requirement of this Regulation. This remains without prejudice to the voluntary nature of that cybersecurity scheme. (62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. (63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. (64) Given the complexity of high-risk AI systems and the risks that are associated with them, it is important to develop an adequate system of conformity assessment procedures for high-risk AI systems involving notified bodies, so-called third-party conformity assessment. However, given the current experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for biometrics.
(65) In order to car­ry out third-par­ty con­for­mi­ty assess­ments when so requi­red, noti­fi­ed bodies should be noti­fi­ed under this Regu­la­ti­on by the natio­nal com­pe­tent aut­ho­ri­ties, pro­vi­ded they are com­pli­ant with a set of requi­re­ments, nota­b­ly on inde­pen­dence, com­pe­tence, absence of con­flicts of inte­rests and sui­ta­ble cyber­se­cu­ri­ty requi­re­ments. Noti­fi­ca­ti­on of tho­se bodies should be sent by natio­nal com­pe­tent aut­ho­ri­ties to the Com­mis­si­on and the other Mem­ber Sta­tes by means of the elec­tro­nic noti­fi­ca­ti­on tool deve­lo­ped and mana­ged by the Com­mis­si­on pur­su­ant to Artic­le R23 of Decis­i­on 768/2008. (65a) In line with Uni­on com­mit­ments under the World Trade Orga­nizati­on Agree­ment on Tech­ni­cal Bar­riers to Trade, it is ade­qua­te to faci­li­ta­te the mutu­al reco­gni­ti­on of con­for­mi­ty assess­ment results pro­du­ced by com­pe­tent con­for­mi­ty assess­ment bodies, inde­pen­dent of the ter­ri­to­ry in which they are estab­lished, pro­vi­ded that tho­se con­for­mi­ty assess­ment bodies estab­lished under the law of a third coun­try meet the appli­ca­ble requi­re­ments of the Regu­la­ti­on and the Uni­on has con­clu­ded an agree­ment to that ext­ent. In this con­text, the Com­mis­si­on should actively explo­re pos­si­ble inter­na­tio­nal instru­ments for that pur­po­se and in par­ti­cu­lar pur­sue the con­clu­si­on of mutu­al reco­gni­ti­on agree­ments with third count­ries. (66) In line with the com­mon­ly estab­lished noti­on of sub­stan­ti­al modi­fi­ca­ti­on for pro­ducts regu­la­ted by Uni­on har­mo­ni­sa­ti­on legis­la­ti­on, it is appro­pria­te that when­ever a chan­ge occurs which may affect the com­pli­ance of a high risk AI system with this Regu­la­ti­on (e.g. chan­ge of ope­ra­ting system or soft­ware archi­tec­tu­re), or when the inten­ded pur­po­se of the system chan­ges, that AI system should be con­side­red a new AI system which should under­go a new con­for­mi­ty assess­ment. Howe­ver, chan­ges occur­ring to the algo­rithm and the per­for­mance of AI systems which con­ti­n­ue to ‘learn’ after being pla­ced on the mar­ket or put into ser­vice (i.e. auto­ma­ti­cal­ly adap­ting how func­tions are car­ri­ed out) should not con­sti­tu­te a sub­stan­ti­al modi­fi­ca­ti­on, pro­vi­ded that tho­se chan­ges have been pre- deter­mi­ned by the pro­vi­der and asses­sed at the moment of the con­for­mi­ty assess­ment. (67) High-risk AI systems should bear the CE mar­king to indi­ca­te their con­for­mi­ty with this Regu­la­ti­on so that they can move free­ly within the inter­nal mar­ket. For high-risk AI systems embedded in a pro­duct, a phy­si­cal CE mar­king should be affi­xed, and may be com­ple­men­ted by a digi­tal CE mar­king. For high-risk AI systems only pro­vi­ded digi­tal­ly, a digi­tal CE mar­king should be used. Mem­ber Sta­tes should not crea­te unju­sti­fi­ed obs­ta­cles to the pla­cing on the mar­ket or put­ting into ser­vice of high-risk AI systems that com­ply with the requi­re­ments laid down in this Regu­la­ti­on and bear the CE mar­king. (68) Under cer­tain con­di­ti­ons, rapid avai­la­bi­li­ty of inno­va­ti­ve tech­no­lo­gies may be cru­cial for health and safe­ty of per­sons, the pro­tec­tion of the envi­ron­ment and cli­ma­te chan­ge and for socie­ty as a who­le. 
It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons, environmental protection and the protection of key industrial and infrastructural assets, market surveillance authorities could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment. In duly justified situations as provided under this Regulation, law enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation of the market surveillance authority, provided that such authorisation is requested during or after the use without undue delay. (69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, as well as providers who consider that an AI system referred to in Annex III is by derogation not high-risk, should be required to register themselves and information about their AI system in an EU database, to be established and managed by the Commission. Before using a high-risk AI system listed in Annex III, deployers of high-risk AI systems that are public authorities, agencies or bodies shall register themselves in such database and select the system that they envisage to use. Other deployers should be entitled to do so voluntarily. This section of the database should be publicly accessible, free of charge, and the information should be easily navigable, understandable and machine-readable. The database should also be user-friendly, for example by providing search functionalities, including through keywords, allowing the general public to find relevant information included in Annex VIII and on the areas of risk under Annex III to which the high-risk AI systems correspond. Any substantial modification of high-risk AI systems should also be registered in the EU database. For high-risk AI systems in the area of law enforcement, migration, asylum and border control management, the registration obligations should be fulfilled in a secure non-public section of the database. Access to the secure non-public section should be strictly limited to the Commission as well as to market surveillance authorities with regard to their national section of that database. High-risk AI systems in the area of critical infrastructure should only be registered at national level. The Commission should be the controller of the EU database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report. The Commission should take into account cybersecurity and hazard-related risks when carrying out its tasks as data controller on the EU database.
In order to maxi­mi­se the avai­la­bi­li­ty and use of the data­ba­se by the public, the data­ba­se, inclu­ding the infor­ma­ti­on made available through it, should com­ply with requi­re­ments under the Direc­ti­ve 2019/882. (70) Cer­tain AI systems inten­ded to inter­act with natu­ral per­sons or to gene­ra­te con­tent may pose spe­ci­fic risks of imper­so­na­ti­on or decep­ti­on irre­spec­ti­ve of whe­ther they qua­li­fy as high-risk or not. In cer­tain cir­cum­stances, the use of the­se systems should the­r­e­fo­re be sub­ject to spe­ci­fic trans­pa­ren­cy obli­ga­ti­ons wit­hout pre­ju­di­ce to the requi­re­ments and obli­ga­ti­ons for high-risk AI systems and sub­ject to tar­ge­ted excep­ti­ons to take into account the spe­cial need of law enforce­ment. In par­ti­cu­lar, natu­ral per­sons should be noti­fi­ed that they are inter­ac­ting with an AI system, unless this is obvious from the point of view of a natu­ral per­son who is rea­son­ab­ly well-infor­med, obser­vant and cir­cum­spect taking into account the cir­cum­stances and the con­text of use. When imple­men­ting such obli­ga­ti­on, the cha­rac­te­ri­stics of indi­vi­du­als belon­ging to vul­nerable groups due to their age or disa­bi­li­ty should be taken into account to the ext­ent the AI system is inten­ded to inter­act with tho­se groups as well. Moreo­ver, natu­ral per­sons should be noti­fi­ed when they are expo­sed to systems that, by pro­ce­s­sing their bio­me­tric data, can iden­ti­fy or infer the emo­ti­ons or inten­ti­ons of tho­se per­sons or assign them to spe­ci­fic cate­go­ries. Such spe­ci­fic cate­go­ries can rela­te to aspects such as sex, age, hair colour, eye colour, tat­toos, per­so­nal traits, eth­nic ori­gin, per­so­nal pre­fe­ren­ces and inte­rests. Such infor­ma­ti­on and noti­fi­ca­ti­ons should be pro­vi­ded in acce­s­si­ble for­mats for per­sons with disa­bi­li­ties. (70a) A varie­ty of AI systems can gene­ra­te lar­ge quan­ti­ties of syn­the­tic con­tent that beco­mes incre­a­sing­ly hard for humans to distin­gu­ish from human-gene­ra­ted and authen­tic con­tent. The wide avai­la­bi­li­ty and incre­a­sing capa­bi­li­ties of tho­se systems have a signi­fi­cant impact on the inte­gri­ty and trust in the infor­ma­ti­on eco­sy­stem, rai­sing new risks of mis­in­for­ma­ti­on and mani­pu­la­ti­on at sca­le, fraud, imper­so­na­ti­on and con­su­mer decep­ti­on. In the light of tho­se impacts, the fast tech­no­lo­gi­cal pace and the need for new methods and tech­ni­ques to trace ori­gin of infor­ma­ti­on, it is appro­pria­te to requi­re pro­vi­ders of tho­se systems to embed tech­ni­cal solu­ti­ons that enable mar­king in a machi­ne rea­da­ble for­mat and detec­tion that the out­put has been gene­ra­ted or mani­pu­la­ted by an AI system and not a human. Such tech­ni­ques and methods should be suf­fi­ci­ent­ly relia­ble, inter­ope­ra­ble, effec­ti­ve and robust as far as this is tech­ni­cal­ly fea­si­ble, taking into account available tech­ni­ques or a com­bi­na­ti­on of such tech­ni­ques, such as water­marks, meta­da­ta iden­ti­fi­ca­ti­ons, cryp­to­gra­phic methods for pro­ving pro­ven­an­ce and authen­ti­ci­ty of con­tent, log­ging methods, fin­ger­prints or other tech­ni­ques, as may be appro­pria­te. When imple­men­ting this obli­ga­ti­on, pro­vi­ders should also take into account the spe­ci­fi­ci­ties and the limi­ta­ti­ons of the dif­fe­rent types of con­tent and the rele­vant tech­no­lo­gi­cal and mar­ket deve­lo­p­ments in the field, as reflec­ted in the gene­ral­ly ack­now­led­ged sta­te-of-the-art. 
Such techniques and methods can be implemented at the level of the system or at the level of the model, including general-purpose AI models generating content, thereby facilitating fulfilment of this obligation by the downstream provider of the AI system. To remain proportionate, it is appropriate to envisage that this marking obligation should not cover AI systems performing primarily an assistive function for standard editing or AI systems not substantially altering the input data provided by the deployer or the semantics thereof. (70b) Further to the technical solutions employed by the providers of the system, deployers who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic (‘deep fakes’) should also clearly and distinguishably disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. The compliance with this transparency obligation should not be interpreted as indicating that the use of the system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties. In those cases, the transparency obligation for deep fakes set out in this Regulation is limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work, including its normal exploitation and use, while maintaining the utility and quality of the work. In addition, it is also appropriate to envisage a similar disclosure obligation in relation to AI-generated or manipulated text to the extent it is published with the purpose of informing the public on matters of public interest unless the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content. (70c) To ensure consistent implementation, it is appropriate to empower the Commission to adopt implementing acts on the application of the provisions on the labelling and detection of artificially generated or manipulated content. Without prejudice to the mandatory nature and full applicability of these obligations, the Commission may also encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content, including to support practical arrangements for making, as appropriate, the detection mechanisms accessible and facilitating cooperation with other actors in the value chain, disseminating content or checking its authenticity and provenance to enable the public to effectively distinguish AI-generated content.
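To make the marking and detection techniques referred to in recitals (70a) to (70c) more concrete, the following is a minimal, purely illustrative sketch, not prescribed by this Regulation, of a machine-readable provenance record that combines metadata identification with a cryptographic method, two of the techniques named in recital (70a). All names (make_provenance_record, verify_provenance_record, PROVIDER_KEY, model_id) and the record layout are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: a machine-readable provenance record in the spirit of
# recital (70a), combining metadata identification with a cryptographic method
# (an HMAC) so that downstream tools can detect that content is AI-generated.
# All names and fields are hypothetical, not prescribed by the Regulation.
import hashlib
import hmac
import json
from datetime import datetime, timezone

PROVIDER_KEY = b"example-secret-held-by-the-provider"  # hypothetical signing key


def make_provenance_record(content: bytes, model_id: str) -> dict:
    """Build a machine-readable record stating that `content` is AI-generated."""
    record = {
        "generator": "ai-system",          # discloses the artificial origin
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check that the record is authentic and matches the content."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and record.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    output = b"Synthetic image bytes or generated text would go here."
    rec = make_provenance_record(output, model_id="example-gpai-model")
    print(json.dumps(rec, indent=2))
    print("verified:", verify_provenance_record(output, rec))
```

In practice, providers would be expected to rely on sufficiently reliable, interoperable and robust state-of-the-art techniques, such as watermarks or established content-provenance standards, rather than this simplified scheme; the sketch only illustrates the kind of machine-readable, verifiable marking the recitals contemplate.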
(70d) The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation. The requirement to label content generated by AI systems under this Regulation is without prejudice to the obligation in Article 16(6) of Regulation (EU) 2022/2065 for providers of hosting services to process notices on illegal content received pursuant to Article 16(1) and should not influence the assessment and the decision on the illegality of the specific content. That assessment should be performed solely with reference to the rules governing the legality of the content. (70e) The compliance with the transparency obligations for the AI systems covered by this Regulation should not be interpreted as indicating that the use of the system or its output is lawful under this Regulation or other Union and Member State law and should be without prejudice to other transparency obligations for deployers of AI systems laid down in Union or national law. (71) Artificial intelligence is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation, is future-proof and resilient to disruption, Member States should ensure that their national competent authorities establish at least one artificial intelligence regulatory sandbox at national level to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. Member States could also fulfil this obligation through participating in already existing regulatory sandboxes or establishing jointly a sandbox with one or several Member States’ competent authorities, insofar as this participation provides an equivalent level of national coverage for the participating Member States. Regulatory sandboxes could be established in physical, digital or hybrid form and may accommodate physical as well as digital products. Establishing authorities should also ensure that the regulatory sandboxes have the adequate resources for their functioning, including financial and human resources.
(72) The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation, to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to facilitate regulatory learning for authorities and companies, including with a view to future adaptations of the legal framework, to support cooperation and the sharing of best practices with the authorities involved in the AI regulatory sandbox, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs), including start-ups. Regulatory sandboxes should be widely available throughout the Union, and particular attention should be given to their accessibility for SMEs, including start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure. Any significant risks identified during the development and testing of such AI systems should result in adequate mitigation and, failing that, in the suspension of the development and testing process. Where appropriate, national competent authorities establishing AI regulatory sandboxes should cooperate with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for the involvement of other actors within the AI ecosystem such as national or European standardisation organisations, notified bodies, testing and experimentation facilities, research and experimentation labs, European Digital Innovation Hubs and relevant stakeholder and civil society organisations. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. AI regulatory sandboxes established under this Regulation should be without prejudice to other legislation allowing for the establishment of other sandboxes aiming at ensuring compliance with legislation other than this Regulation. Where appropriate, relevant competent authorities in charge of those other regulatory sandboxes should consider the benefits of using those sandboxes also for the purpose of ensuring compliance of AI systems with this Regulation.
Upon agreement between the national competent authorities and the participants in the AI regulatory sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory sandbox. (72a) This Regulation should provide the legal basis for the providers and prospective providers in the AI regulatory sandbox to use personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, only under specified conditions, in line with Articles 6(4) and 9(2)(g) of Regulation (EU) 2016/679, and Articles 5, 6 and 10 of Regulation (EU) 2018/1725, and without prejudice to Articles 4(2) and 10 of Directive (EU) 2016/680. All other obligations of data controllers and rights of data subjects under Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. In particular, this Regulation should not provide a legal basis in the meaning of Article 22(2)(b) of Regulation (EU) 2016/679 and Article 24(2)(b) of Regulation (EU) 2018/1725. Providers and prospective providers in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to adequately mitigate any identified significant risks to safety, health, and fundamental rights that may arise during the development, testing and experimentation in the sandbox. (72b) In order to accelerate the process of development and placing on the market of high-risk AI systems listed in Annex III, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real world conditions, without participating in an AI regulatory sandbox. However, in such cases and taking into account the possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient guarantees and conditions are introduced by the Regulation for providers or prospective providers. Such guarantees should include, among others, requesting informed consent of natural persons to participate in testing in real world conditions, with the exception of law enforcement in cases where the seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in such testing under this Regulation is distinct from and without prejudice to consent of data subjects for the processing of their personal data under the relevant data protection law.
It is also important to mini­mi­se the risks and enable over­sight by com­pe­tent aut­ho­ri­ties and the­r­e­fo­re requi­re pro­s­pec­ti­ve pro­vi­ders to have a real-world test­ing plan sub­mit­ted to com­pe­tent mar­ket sur­veil­lan­ce aut­ho­ri­ty, regi­ster the test­ing in dedi­ca­ted sec­tions in the EU-wide data­ba­se sub­ject to some limi­t­ed excep­ti­ons, set limi­ta­ti­ons on the peri­od for which the test­ing can be done and requi­re addi­tio­nal safe­guards for per­sons belon­ging to cer­tain vul­nerable groups as well as a writ­ten agree­ment defi­ning the roles and respon­si­bi­li­ties of pro­s­pec­ti­ve pro­vi­ders and deployers and effec­ti­ve over­sight by com­pe­tent per­son­nel invol­ved in the real world test­ing. Fur­ther­mo­re, it is appro­pria­te to envi­sa­ge addi­tio­nal safe­guards to ensu­re that the pre­dic­tions, recom­men­da­ti­ons or decis­i­ons of the AI system can be effec­tively rever­sed and dis­re­gard­ed and that per­so­nal data is pro­tec­ted and is dele­ted when the sub­jects have with­drawn their con­sent to par­ti­ci­pa­te in the test­ing wit­hout pre­ju­di­ce to their rights as data sub­jects under the EU data pro­tec­tion law. As regards trans­fer of data, it is also appro­pria­te to envi­sa­ge that data coll­ec­ted and pro­ce­s­sed for the pur­po­se of the test­ing in real world con­di­ti­ons should only be trans­fer­red to third count­ries out­side the Uni­on pro­vi­ded appro­pria­te and appli­ca­ble safe­guards under Uni­on law are imple­men­ted, nota­b­ly in accordance with bases for trans­fer of per­so­nal data under Uni­on law on data pro­tec­tion, while for non-per­so­nal data appro­pria­te safe­guards are put in place in accordance with Uni­on law, such as the Data Gover­nan­ce Act and the Data Act. (72c) To ensu­re that Arti­fi­ci­al Intel­li­gence leads to soci­al­ly and envi­ron­men­tal­ly bene­fi­ci­al out­co­mes, Mem­ber Sta­tes are encou­ra­ged to sup­port and pro­mo­te rese­arch and deve­lo­p­ment of AI solu­ti­ons in sup­port of soci­al­ly and envi­ron­men­tal­ly bene­fi­ci­al out­co­mes, such as AI-based solu­ti­ons to increa­se acce­s­si­bi­li­ty for per­sons with disa­bi­li­ties, tack­le socio-eco­no­mic ine­qua­li­ties, or meet envi­ron­men­tal tar­gets, by allo­ca­ting suf­fi­ci­ent resour­ces, inclu­ding public and Uni­on fun­ding, and, whe­re appro­pria­te and pro­vi­ded that the eli­gi­bi­li­ty and sel­ec­tion cri­te­ria are ful­fil­led, con­side­ring in par­ti­cu­lar pro­jects which pur­sue such objec­ti­ves. Such pro­jects should be based on the prin­ci­ple of inter­di­sci­pli­na­ry coope­ra­ti­on bet­ween AI deve­lo­pers, experts on ine­qua­li­ty and non- dis­cri­mi­na­ti­on, acce­s­si­bi­li­ty, con­su­mer, envi­ron­men­tal, and digi­tal rights, as well as aca­de­mics. (73) In order to pro­mo­te and pro­tect inno­va­ti­on, it is important that the inte­rests of SMEs, inclu­ding start-ups, that are pro­vi­ders or deployers of AI systems are taken into par­ti­cu­lar account. To this objec­ti­ve, Mem­ber Sta­tes should deve­lop initia­ti­ves, which are tar­ge­ted at tho­se ope­ra­tors, inclu­ding on, awa­re­ness rai­sing and infor­ma­ti­on com­mu­ni­ca­ti­on. 
Member States shall provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes provided that they fulfil the eligibility conditions and selection criteria and without precluding other providers and prospective providers from accessing the sandboxes provided the same conditions and criteria are fulfilled. Member States shall utilise existing channels and, where appropriate, establish new dedicated channels for communication with SMEs, start-ups, deployers, other innovators and, as appropriate, local public authorities, to support SMEs throughout their development path by providing guidance and responding to queries about the implementation of this Regulation. Where appropriate, these channels shall work together to create synergies and ensure homogeneity in their guidance to SMEs including start-ups and deployers. Additionally, Member States should facilitate the participation of SMEs and other relevant stakeholders in the standardisation development processes. Moreover, the specific interests and needs of SMEs including start-up providers should be taken into account when Notified Bodies set conformity assessment fees. The Commission should regularly assess the certification and compliance costs for SMEs including start-ups, through transparent consultations with deployers, and should work with Member States to lower such costs. For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border deployers. In order to address the specific needs of SMEs including start-ups, the Commission should provide standardised templates for the areas covered by this Regulation upon request of the AI Board. Additionally, the Commission should complement Member States’ efforts by providing a single information platform with easy-to-use information with regard to this Regulation for all providers and deployers, by organising appropriate communication campaigns to raise awareness about the obligations arising from this Regulation, and by evaluating and promoting the convergence of best practices in public procurement procedures in relation to AI systems. Medium-sized enterprises which recently changed from the small to medium-size category within the meaning of the Annex to Recommendation 2003/361/EC (Article 16) should have access to these support measures, as these new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding of, and compliance with, the provisions.
(73a) In order to pro­mo­te and pro­tect inno­va­ti­on, the AI-on demand plat­form, all rele­vant EU fun­ding pro­gram­mes and pro­jects, such as Digi­tal Euro­pe Pro­gram­me, Hori­zon Euro­pe, imple­men­ted by the Com­mis­si­on and the Mem­ber Sta­tes at natio­nal or Uni­on level should, as appro­pria­te, con­tri­bu­te to the achie­ve­ment of the objec­ti­ves of this Regu­la­ti­on. (74) In par­ti­cu­lar, in order to mini­mi­se the risks to imple­men­ta­ti­on resul­ting from lack of know­ledge and exper­ti­se in the mar­ket as well as to faci­li­ta­te com­pli­ance of pro­vi­ders, nota­b­ly SMEs, inclu­ding start-ups, and noti­fi­ed bodies with their obli­ga­ti­ons under this Regu­la­ti­on, the AI-on demand plat­form, the Euro­pean Digi­tal Inno­va­ti­on Hubs and the Test­ing and Expe­ri­men­ta­ti­on Faci­li­ties estab­lished by the Com­mis­si­on and the Mem­ber Sta­tes at natio­nal or EU level should con­tri­bu­te to the imple­men­ta­ti­on of this Regu­la­ti­on. Within their respec­ti­ve mis­si­on and fields of com­pe­tence, they may pro­vi­de in par­ti­cu­lar tech­ni­cal and sci­en­ti­fic sup­port to pro­vi­ders and noti­fi­ed bodies. (74a) Moreo­ver, in order to ensu­re pro­por­tio­na­li­ty con­side­ring the very small size of some ope­ra­tors regar­ding costs of inno­va­ti­on, it is appro­pria­te to allow microen­ter­pri­ses to ful­fil one of the most cost­ly obli­ga­ti­ons, name­ly to estab­lish a qua­li­ty manage­ment system, in a sim­pli­fi­ed man­ner which would redu­ce the admi­ni­stra­ti­ve bur­den and the costs for tho­se enter­pri­ses wit­hout affec­ting the level of pro­tec­tion and the need for com­pli­ance with the requi­re­ments for high-risk AI systems. The Com­mis­si­on should deve­lop gui­de­lines to spe­ci­fy the ele­ments of the qua­li­ty manage­ment system to be ful­fil­led in this sim­pli­fi­ed man­ner by microen­ter­pri­ses. (75) It is appro­pria­te that the Com­mis­si­on faci­li­ta­tes, to the ext­ent pos­si­ble, access to Test­ing and Expe­ri­men­ta­ti­on Faci­li­ties to bodies, groups or labo­ra­to­ries estab­lished or accre­di­ted pur­su­ant to any rele­vant Uni­on har­mo­ni­sa­ti­on legis­la­ti­on and which ful­fil tasks in the con­text of con­for­mi­ty assess­ment of pro­ducts or devices cover­ed by that Uni­on har­mo­ni­sa­ti­on legis­la­ti­on. This is nota­b­ly the case for expert panels, expert labo­ra­to­ries and refe­rence labo­ra­to­ries in the field of medi­cal devices pur­su­ant to Regu­la­ti­on (EU) 2017/745 and Regu­la­ti­on (EU) 2017/746. (75a) This Regu­la­ti­on should estab­lish a gover­nan­ce frame­work that both allo­ws to coor­di­na­te and sup­port the appli­ca­ti­on of this Regu­la­ti­on at natio­nal level, as well as build capa­bi­li­ties at Uni­on level and inte­gra­te stake­hol­ders in the field of arti­fi­ci­al intel­li­gence. The effec­ti­ve imple­men­ta­ti­on and enforce­ment of this Regu­la­ti­on requi­re a gover­nan­ce frame­work that allo­ws to coor­di­na­te and build up cen­tral exper­ti­se at Uni­on level. The Com­mis­si­on has estab­lished the AI Office by Com­mis­si­on decis­i­on of […], which has as its mis­si­on to deve­lop Uni­on exper­ti­se and capa­bi­li­ties in the field of arti­fi­ci­al intel­li­gence and to con­tri­bu­te to the imple­men­ta­ti­on of Uni­on legis­la­ti­on on arti­fi­ci­al intel­li­gence. 
Mem­ber Sta­tes should faci­li­ta­te the tasks of the AI Office with a view to sup­port the deve­lo­p­ment of Uni­on exper­ti­se and capa­bi­li­ties at Uni­on level and to streng­then the func­tio­ning of the digi­tal sin­gle mar­ket. Fur­ther­mo­re, a Euro­pean Arti­fi­ci­al Intel­li­gence Board com­po­sed of repre­sen­ta­ti­ves of the Mem­ber Sta­tes, a sci­en­ti­fic panel to inte­gra­te the sci­en­ti­fic com­mu­ni­ty and an advi­so­ry forum to con­tri­bu­te stake­hol­der input to the imple­men­ta­ti­on of this Regu­la­ti­on, both at natio­nal and Uni­on level, should be estab­lished. The deve­lo­p­ment of Uni­on exper­ti­se and capa­bi­li­ties should also include making use of exi­sting resour­ces and exper­ti­se, nota­b­ly through syn­er­gies with struc­tures built up in the con­text of the Uni­on level enforce­ment of other legis­la­ti­on and syn­er­gies with rela­ted initia­ti­ves at Uni­on level, such as the EuroHPC Joint Under­ta­king and the AI Test­ing and Expe­ri­men­ta­ti­on Faci­li­ties under the Digi­tal Euro­pe Pro­gram­me. (76) In order to faci­li­ta­te a smooth, effec­ti­ve and har­mo­ni­s­ed imple­men­ta­ti­on of this Regu­la­ti­on a Euro­pean Arti­fi­ci­al Intel­li­gence Board should be estab­lished. The Board should reflect the various inte­rests of the AI eco-system and be com­po­sed of repre­sen­ta­ti­ves of the Mem­ber Sta­tes. The Board should be respon­si­ble for a num­ber of advi­so­ry tasks, inclu­ding issuing opi­ni­ons, recom­men­da­ti­ons, advice or con­tri­bu­ting to gui­dance on mat­ters rela­ted to the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding on enforce­ment mat­ters, tech­ni­cal spe­ci­fi­ca­ti­ons or exi­sting stan­dards regar­ding the requi­re­ments estab­lished in this Regu­la­ti­on and pro­vi­ding advice to the Com­mis­si­on and the Mem­ber Sta­tes and their natio­nal com­pe­tent aut­ho­ri­ties on spe­ci­fic que­sti­ons rela­ted to arti­fi­ci­al intel­li­gence. In order to give some fle­xi­bi­li­ty to Mem­ber Sta­tes in the desi­gna­ti­on of their repre­sen­ta­ti­ves in the AI Board, such repre­sen­ta­ti­ves may be any per­sons belon­ging to public enti­ties who should have the rele­vant com­pe­ten­ces and powers to faci­li­ta­te coor­di­na­ti­on at natio­nal level and con­tri­bu­te to the achie­ve­ment of the Board’s tasks. The Board should estab­lish two stan­ding sub-groups to pro­vi­de a plat­form for coope­ra­ti­on and exch­an­ge among mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fy­ing aut­ho­ri­ties on issues rela­ted respec­tively to mar­ket sur­veil­lan­ce and noti­fi­ed bodies. The stan­ding sub­group for mar­ket sur­veil­lan­ce should act as the Admi­ni­stra­ti­ve Coope­ra­ti­on Group (ADCO) for this Regu­la­ti­on in the mea­ning of Artic­le 30 of Regu­la­ti­on (EU) 2019/1020. In line with the role and tasks of the Com­mis­si­on pur­su­ant to Artic­le 33 of Regu­la­ti­on (EU) 2019/1020, the Com­mis­si­on should sup­port the acti­vi­ties of the stan­ding sub­group for mar­ket sur­veil­lan­ce by under­ta­king mar­ket eva­lua­tions or stu­dies, nota­b­ly with a view to iden­ti­fy­ing aspects of this Regu­la­ti­on requi­ring spe­ci­fic and urgent coor­di­na­ti­on among mar­ket sur­veil­lan­ce aut­ho­ri­ties. The Board may estab­lish other stan­ding or tem­po­ra­ry sub-groups as appro­pria­te for the pur­po­se of exami­ning spe­ci­fic issues. 
The Board should also coope­ra­te, as appro­pria­te, with rele­vant EU bodies, expert groups and net­works acti­ve in the con­text of rele­vant EU legis­la­ti­on, inclu­ding in par­ti­cu­lar tho­se acti­ve under rele­vant EU regu­la­ti­on on data, digi­tal pro­ducts and ser­vices. (76x) With a view to ensu­re the invol­vement of stake­hol­ders in the imple­men­ta­ti­on and appli­ca­ti­on of this Regu­la­ti­on, an advi­so­ry forum should be estab­lished to advi­se and pro­vi­de tech­ni­cal exper­ti­se to the Board and the Com­mis­si­on. To ensu­re a varied and balan­ced stake­hol­der repre­sen­ta­ti­on bet­ween com­mer­cial and non-com­mer­cial inte­rest and, within the cate­go­ry of com­mer­cial inte­rests, with regards to SMEs and other under­ta­kings, the advi­so­ry forum should com­pri­se inter alia indu­stry, start-ups, SMEs, aca­de­mia, civil socie­ty, inclu­ding social part­ners, as well as the Fun­da­men­tal Rights Agen­cy, Euro­pean Uni­on Agen­cy for Cyber­se­cu­ri­ty, the Euro­pean Com­mit­tee for Stan­dar­dizati­on (CEN), the Euro­pean Com­mit­tee for Elec­tro­tech­ni­cal Stan­dar­dizati­on (CENELEC) and the Euro­pean Tele­com­mu­ni­ca­ti­ons Stan­dards Insti­tu­te (ETSI). (76y) To sup­port the imple­men­ta­ti­on and enforce­ment of this Regu­la­ti­on, in par­ti­cu­lar the moni­to­ring acti­vi­ties of the AI Office as regards gene­ral-pur­po­se AI models, a sci­en­ti­fic panel of inde­pen­dent experts should be estab­lished. The inde­pen­dent experts con­sti­tu­ting the sci­en­ti­fic panel should be sel­ec­ted on the basis of up-to-date sci­en­ti­fic or tech­ni­cal exper­ti­se in the field of arti­fi­ci­al intel­li­gence and should per­form their tasks with impar­tia­li­ty, objec­ti­vi­ty and ensu­re the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks and acti­vi­ties. To allow rein­for­cing natio­nal capa­ci­ties neces­sa­ry for the effec­ti­ve enforce­ment of this Regu­la­ti­on, Mem­ber Sta­tes should be able to request sup­port from the pool of experts con­sti­tu­ting the sci­en­ti­fic panel for their enforce­ment acti­vi­ties. (76a) In order to sup­port ade­qua­te enforce­ment as regards AI systems and rein­force the capa­ci­ties of the Mem­ber Sta­tes, EU AI test­ing sup­port struc­tures should be estab­lished and made available to the Mem­ber Sta­tes. (77) Mem­ber Sta­tes hold a key role in the appli­ca­ti­on and enforce­ment of this Regu­la­ti­on. In this respect, each Mem­ber Sta­te should desi­gna­te at least one noti­fy­ing aut­ho­ri­ty and at least one mar­ket sur­veil­lan­ce aut­ho­ri­ty as natio­nal com­pe­tent aut­ho­ri­ties for the pur­po­se of super­vi­sing the appli­ca­ti­on and imple­men­ta­ti­on of this Regu­la­ti­on. Mem­ber Sta­tes may deci­de to appoint any kind of public enti­ty to per­form the tasks of the natio­nal com­pe­tent aut­ho­ri­ties within the mea­ning of this Regu­la­ti­on, in accordance with their spe­ci­fic natio­nal orga­ni­sa­tio­nal cha­rac­te­ri­stics and needs. In order to increa­se orga­ni­sa­ti­on effi­ci­en­cy on the side of Mem­ber Sta­tes and to set a sin­gle point of cont­act vis-à-vis the public and other coun­ter­parts at Mem­ber Sta­te and Uni­on levels, each Mem­ber Sta­te should desi­gna­te a mar­ket sur­veil­lan­ce aut­ho­ri­ty to act as sin­gle point of cont­act. 
(77a) The natio­nal com­pe­tent aut­ho­ri­ties should exer­cise their powers inde­pendent­ly, impar­ti­al­ly and wit­hout bias, so as to safe­guard the prin­ci­ples of objec­ti­vi­ty of their acti­vi­ties and tasks and to ensu­re the appli­ca­ti­on and imple­men­ta­ti­on of this Regu­la­ti­on. The mem­bers of the­se aut­ho­ri­ties should refrain from any action incom­pa­ti­ble with their duties and should be sub­ject to con­fi­den­tia­li­ty rules under this Regu­la­ti­on. (78) In order to ensu­re that pro­vi­ders of high-risk AI systems can take into account the expe­ri­ence on the use of high-risk AI systems for impro­ving their systems and the design and deve­lo­p­ment pro­cess or can take any pos­si­ble cor­rec­ti­ve action in a time­ly man­ner, all pro­vi­ders should have a post-mar­ket moni­to­ring system in place. Whe­re rele­vant, post- mar­ket moni­to­ring should include an ana­ly­sis of the inter­ac­tion with other AI systems inclu­ding other devices and soft­ware. Post-mar­ket moni­to­ring should not cover sen­si­ti­ve ope­ra­tio­nal data of deployers which are law enforce­ment aut­ho­ri­ties. This system is also key to ensu­re that the pos­si­ble risks emer­ging from AI systems which con­ti­n­ue to ‘learn’ after being pla­ced on the mar­ket or put into ser­vice can be more effi­ci­ent­ly and time­ly addres­sed. In this con­text, pro­vi­ders should also be requi­red to have a system in place to report to the rele­vant aut­ho­ri­ties any serious inci­dents resul­ting from the use of their AI systems, mea­ning inci­dent or mal­func­tio­ning lea­ding to death or serious dama­ge to health, serious and irrever­si­ble dis­rup­ti­on of the manage­ment and ope­ra­ti­on of cri­ti­cal infras­truc­tu­re, brea­ches of obli­ga­ti­ons under Uni­on law inten­ded to pro­tect fun­da­men­tal rights or serious dama­ge to pro­per­ty or the envi­ron­ment. (79) In order to ensu­re an appro­pria­te and effec­ti­ve enforce­ment of the requi­re­ments and obli­ga­ti­ons set out by this Regu­la­ti­on, which is Uni­on har­mo­ni­sa­ti­on legis­la­ti­on, the system of mar­ket sur­veil­lan­ce and com­pli­ance of pro­ducts estab­lished by Regu­la­ti­on (EU) 2019/1020 should app­ly in its enti­re­ty. Mar­ket sur­veil­lan­ce aut­ho­ri­ties desi­gna­ted pur­su­ant to this Regu­la­ti­on should have all enforce­ment powers under this Regu­la­ti­on and Regu­la­ti­on (EU) 2019/1020 and should exer­cise their powers and car­ry out their duties inde­pendent­ly, impar­ti­al­ly and wit­hout bias. Alt­hough the majo­ri­ty of AI systems are not sub­ject to spe­ci­fic requi­re­ments and obli­ga­ti­ons under this Regu­la­ti­on, mar­ket sur­veil­lan­ce aut­ho­ri­ties may take mea­su­res in rela­ti­on to all AI systems when they pre­sent a risk in accordance with this Regu­la­ti­on. Due to the spe­ci­fic natu­re of Uni­on insti­tu­ti­ons, agen­ci­es and bodies fal­ling within the scope of this Regu­la­ti­on, it is appro­pria­te to desi­gna­te the Euro­pean Data Pro­tec­tion Super­vi­sor as a com­pe­tent mar­ket sur­veil­lan­ce aut­ho­ri­ty for them. This should be wit­hout pre­ju­di­ce to the desi­gna­ti­on of natio­nal com­pe­tent aut­ho­ri­ties by the Mem­ber Sta­tes. Mar­ket sur­veil­lan­ce acti­vi­ties should not affect the abili­ty of the super­vi­sed enti­ties to car­ry out their tasks inde­pendent­ly, when such inde­pen­dence is requi­red by Uni­on law. 
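As a purely illustrative complement to the serious-incident reporting described in recital (78), the following minimal sketch models a report record whose categories mirror those listed in that recital (death or serious damage to health, serious and irreversible disruption of the management and operation of critical infrastructure, breaches of obligations under Union law intended to protect fundamental rights, and serious damage to property or the environment). The class, field and function names are hypothetical assumptions, not anything prescribed by this Regulation.

```python
# Illustrative sketch only: a minimal data model for the serious-incident
# reporting system described in recital (78). All names are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime
from enum import Enum


class SeriousIncidentCategory(Enum):
    DEATH_OR_SERIOUS_HEALTH_DAMAGE = "death or serious damage to health"
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "serious and irreversible disruption of critical infrastructure"
    FUNDAMENTAL_RIGHTS_BREACH = "breach of Union-law obligations protecting fundamental rights"
    PROPERTY_OR_ENVIRONMENTAL_DAMAGE = "serious damage to property or the environment"


@dataclass
class SeriousIncidentReport:
    ai_system_id: str
    occurred_at: datetime
    category: SeriousIncidentCategory
    description: str
    corrective_measures: str = ""

    def to_payload(self) -> dict:
        """Serialise the report for submission to the relevant authority."""
        payload = asdict(self)
        payload["occurred_at"] = self.occurred_at.isoformat()
        payload["category"] = self.category.value
        return payload


if __name__ == "__main__":
    report = SeriousIncidentReport(
        ai_system_id="example-high-risk-system",
        occurred_at=datetime(2025, 1, 15, 9, 30),
        category=SeriousIncidentCategory.CRITICAL_INFRASTRUCTURE_DISRUPTION,
        description="Irreversible disruption of a grid-management controller.",
        corrective_measures="System withdrawn from service pending investigation.",
    )
    print(report.to_payload())
```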
(79a) This Regulation is without prejudice to the competences, tasks, powers and independence of relevant national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies and data protection authorities. Where necessary for their mandate, those national public authorities or bodies should also have access to any documentation created under this Regulation. A specific safeguard procedure should be set for ensuring adequate and timely enforcement against AI systems presenting a risk to health, safety and fundamental rights. The procedure for such AI systems presenting a risk should be applied to high-risk AI systems presenting a risk, prohibited systems which have been placed on the market, put into service or used in violation of the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of the transparency requirements laid down in this Regulation and present a risk. (80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the competent authorities for the supervision and enforcement of the financial services legislation, notably competent authorities as defined in Directive 2009/138/EC, Directive (EU) 2016/97, Directive 2013/36/EU, Regulation (EU) No 575/2013, Directive 2008/48/EC and Directive 2014/17/EU of the European Parliament and of the Council, should be designated, within their respective competences, as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions unless Member States decide to designate another authority to fulfil these market surveillance tasks. Those competent authorities should have all powers under this Regulation and Regulation (EU) 2019/1020 on market surveillance to enforce the requirements and obligations of this Regulation, including powers to carry out ex post market surveillance activities that can be integrated, as appropriate, into their existing supervisory mechanisms and procedures under the relevant Union financial services legislation.
It is appropriate to envisage that, when acting as market surveillance authorities under this Regulation, the national authorities responsible for the supervision of credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism (SSM) established by Council Regulation No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council, it is also appropriate to integrate some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. The same regime should apply to insurance and re-insurance undertakings and insurance holding companies under Directive 2009/138/EC (Solvency II) and the insurance intermediaries under Directive (EU) 2016/97 and other types of financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services legislation to ensure consistency and equal treatment in the financial sector. (80-x) Each market surveillance authority for high-risk AI systems listed in point 1 of Annex III insofar as these systems are used for law enforcement purposes and for purposes listed in points 6, 7 and 8 of Annex III should have effective investigative and corrective powers, including at least the power to obtain access to all personal data that are being processed and to all information necessary for the performance of its tasks. The market surveillance authorities should be able to exercise their powers by acting with complete independence. Any limitations of their access to sensitive operational data under this Regulation should be without prejudice to the powers conferred to them by Directive (EU) 2016/680. No exclusion on disclosing data to national data protection authorities under this Regulation should affect the current or future powers of those authorities beyond the scope of this Regulation.
(80x) The market surveillance authorities of the Member States and the Commission should be able to propose joint activities, including joint investigations, to be conducted by market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness and providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across several Member States. Joint activities to promote compliance should be carried out in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office should provide coordination support for joint investigations. (80y) It is necessary to clarify the responsibilities and competences on national and Union level as regards AI systems that are built on general-purpose AI models. To avoid overlapping competences, where an AI system is based on a general-purpose AI model and the model and system are provided by the same provider, the supervision should take place at Union level through the AI Office, which should have the powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020 for this purpose. In all other cases, national market surveillance authorities remain responsible for the supervision of AI systems. However, for general-purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk, market surveillance authorities should cooperate with the AI Office to carry out evaluations of compliance and inform the Board and other market surveillance authorities accordingly. Furthermore, market surveillance authorities should be able to request assistance from the AI Office where the market surveillance authority is unable to conclude an investigation on a high-risk AI system because of its inability to access certain information related to the general-purpose AI model on which the high-risk AI system is built. In such cases, the procedure regarding mutual assistance in cross-border cases in Chapter VI of Regulation (EU) 2019/1020 should apply by analogy. (80z) To make best use of the centralised Union expertise and synergies at Union level, the powers of supervision and enforcement of the obligations on providers of general-purpose AI models should be a competence of the Commission. The Commission should entrust the implementation of these tasks to the AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties. The AI Office should be able to carry out all necessary actions to monitor the effective implementation of this Regulation as regards general-purpose AI models. It should be able to investigate possible infringements of the rules on providers of general-purpose AI models both on its own initiative, following the results of its monitoring activities, or upon request from market surveillance authorities in line with the conditions set out in this Regulation.
To sup­port effec­ti­ve moni­to­ring of the AI Office, it should pro­vi­de for the pos­si­bi­li­ty that down­stream pro­vi­ders lodge com­plaints about pos­si­ble inf­rin­ge­ments of the rules on pro­vi­ders of gene­ral pur­po­se AI models. (80z+1) With a view to com­ple­ment the gover­nan­ce systems for gene­ral-pur­po­se AI models, the sci­en­ti­fic panel should sup­port the moni­to­ring acti­vi­ties of the AI Office and may, in cer­tain cases, pro­vi­de qua­li­fi­ed alerts to the AI Office which trig­ger fol­low-ups such as inve­sti­ga­ti­ons. This should be the case whe­re the sci­en­ti­fic panel has rea­son to suspect that a gene­ral-pur­po­se AI model poses a con­cre­te and iden­ti­fia­ble risk at Uni­on level. Fur­ther­mo­re, this should be the case whe­re the sci­en­ti­fic panel has rea­son to suspect that a gene­ral-pur­po­se AI model meets the cri­te­ria that would lead to a clas­si­fi­ca­ti­on as gene­ral- pur­po­se AI model with syste­mic risk. To equip the sci­en­ti­fic panel with the infor­ma­ti­on neces­sa­ry for the per­for­mance of the­se tasks, the­re should be a mecha­nism wher­eby the sci­en­ti­fic panel can request the Com­mis­si­on to requi­re docu­men­ta­ti­on or infor­ma­ti­on from a pro­vi­der. (80z+2) The AI Office should be able to take the neces­sa­ry actions to moni­tor the effec­ti­ve imple­men­ta­ti­on of and com­pli­ance with the obli­ga­ti­ons for pro­vi­ders of gene­ral pur­po­se AI models laid down in this Regu­la­ti­on. The AI Office should be able to inve­sti­ga­te pos­si­ble inf­rin­ge­ments in accordance with the powers pro­vi­ded for in this Regu­la­ti­on, inclu­ding by reque­st­ing docu­men­ta­ti­on and infor­ma­ti­on, by con­duc­ting eva­lua­tions, as well as by reque­st­ing mea­su­res from pro­vi­ders of gene­ral pur­po­se AI models. In the con­duct of eva­lua­tions, in order to make use of inde­pen­dent exper­ti­se, the AI Office should be able to invol­ve inde­pen­dent experts to car­ry out the eva­lua­tions on its behalf. Com­pli­ance with the obli­ga­ti­ons should be enforceable, inter alia, through requests to take appro­pria­te mea­su­res, inclu­ding risk miti­ga­ti­on mea­su­res in case of iden­ti­fi­ed syste­mic risks as well as rest­ric­ting the making available on the mar­ket, with­dra­wing or recal­ling the model. As a safe­guard in case nee­ded bey­ond the pro­ce­du­ral rights pro­vi­ded for in this Regu­la­ti­on, pro­vi­ders of gene­ral-pur­po­se AI models should have the pro­ce­du­ral rights pro­vi­ded for in Artic­le 18 of Regu­la­ti­on (EU) 2019/1020, which should app­ly by ana­lo­gy, wit­hout pre­ju­di­ce to more spe­ci­fic pro­ce­du­ral rights pro­vi­ded for by this Regu­la­ti­on. (81) The deve­lo­p­ment of AI systems other than high-risk AI systems in accordance with the requi­re­ments of this Regu­la­ti­on may lead to a lar­ger upt­ake of ethi­cal and trust­wor­t­hy arti­fi­ci­al intel­li­gence in the Uni­on. Pro­vi­ders of non-high-risk AI systems should be encou­ra­ged to crea­te codes of con­duct, inclu­ding rela­ted gover­nan­ce mecha­nisms, inten­ded to foster the vol­un­t­a­ry appli­ca­ti­on of some or all of the man­da­to­ry requi­re­ments appli­ca­ble to high-risk AI systems, adapt­ed in light of the inten­ded pur­po­se of the systems and the lower risk invol­ved and taking into account the available tech­ni­cal solu­ti­ons and indu­stry best prac­ti­ces such as model and data cards. 
Pro­vi­ders and, as appro­pria­te, deployers of all AI systems, high-risk or not, and models should also be encou­ra­ged to app­ly on a vol­un­t­a­ry basis addi­tio­nal requi­re­ments rela­ted, for exam­p­le, to the ele­ments of the Euro­pean ethic gui­de­lines for trust­wor­t­hy AI, envi­ron­men­tal sus­taina­bi­li­ty, AI liter­a­cy mea­su­res, inclu­si­ve and diver­se design and deve­lo­p­ment of AI systems, inclu­ding atten­ti­on to vul­nerable per­sons and acce­s­si­bi­li­ty to per­sons with disa­bi­li­ty, stake­hol­ders’ par­ti­ci­pa­ti­on with the invol­vement as appro­pria­te, of rele­vant stake­hol­ders such as busi­ness and civil socie­ty orga­ni­sa­ti­ons, aca­de­mia and rese­arch orga­ni­sa­ti­ons, trade uni­ons and con­su­mer pro­tec­tion orga­ni­sa­ti­on in the design and deve­lo­p­ment of AI systems, and diver­si­ty of the deve­lo­p­ment teams, inclu­ding gen­der balan­ce. To ensu­re that the vol­un­t­a­ry codes of con­duct are effec­ti­ve, they should be based on clear objec­ti­ves and key per­for­mance indi­ca­tors to mea­su­re the achie­ve­ment of tho­se objec­ti­ves. They should be also deve­lo­ped in an inclu­si­ve way, as appro­pria­te, with the invol­vement of rele­vant stake­hol­ders such as busi­ness and civil socie­ty orga­ni­sa­ti­ons, aca­de­mia and rese­arch orga­ni­sa­ti­ons, trade uni­ons and con­su­mer pro­tec­tion orga­ni­sa­ti­on. The Com­mis­si­on may deve­lop initia­ti­ves, inclu­ding of a sec­to­ri­al natu­re, to faci­li­ta­te the lowe­ring of tech­ni­cal bar­riers hin­de­ring cross-bor­der exch­an­ge of data for AI deve­lo­p­ment, inclu­ding on data access infras­truc­tu­re, seman­tic and tech­ni­cal inter­ope­ra­bi­li­ty of dif­fe­rent types of data. (82) It is important that AI systems rela­ted to pro­ducts that are not high-risk in accordance with this Regu­la­ti­on and thus are not requi­red to com­ply with the requi­re­ments set out for high- risk AI systems are nevert­hel­ess safe when pla­ced on the mar­ket or put into ser­vice. To con­tri­bu­te to this objec­ti­ve, Regu­la­ti­on (EU) 2023/988 of the Euro­pean Par­lia­ment and of the Council28 would app­ly as a safe­ty net. (83) In order to ensu­re trustful and cons­truc­ti­ve coope­ra­ti­on of com­pe­tent aut­ho­ri­ties on Uni­on and natio­nal level, all par­ties invol­ved in the appli­ca­ti­on of this Regu­la­ti­on should respect the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks, in accordance with Uni­on or natio­nal law. They should car­ry out their tasks and acti­vi­ties in such a man­ner as to pro­tect, in par­ti­cu­lar, intellec­tu­al pro­per­ty rights, con­fi­den­ti­al busi­ness infor­ma­ti­on and trade secrets, the effec­ti­ve imple­men­ta­ti­on of this Regu­la­ti­on, public and natio­nal secu­ri­ty inte­rests, the inte­gri­ty of cri­mi­nal or admi­ni­stra­ti­ve pro­ce­e­dings, and the inte­gri­ty of clas­si­fi­ed infor­ma­ti­on. (84) Com­pli­ance with this Regu­la­ti­on should be enforceable by means of the impo­si­ti­on of pen­al­ties and other enforce­ment mea­su­res. Mem­ber Sta­tes should take all neces­sa­ry mea­su­res to ensu­re that the pro­vi­si­ons of this Regu­la­ti­on are imple­men­ted, inclu­ding by lay­ing down effec­ti­ve, pro­por­tio­na­te and dissua­si­ve pen­al­ties for their inf­rin­ge­ment, and in respect of the ne bis in idem prin­ci­ple. 
In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down. When assessing the amount of the fines, Member States should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the provider's size, in particular if the provider is an SME, including a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. (84a) Compliance with the obligations on providers of general-purpose AI models imposed under this Regulation should be enforceable among others by means of fines. To that end, appropriate levels of fines should also be laid down for infringement of those obligations, including the failure to comply with measures requested by the Commission in accordance with this Regulation, subject to appropriate limitation periods in accordance with the principle of proportionality. All decisions taken by the Commission under this Regulation are subject to review by the Court of Justice of the European Union in accordance with the TFEU. (84aa) Union and national law already provides effective remedies to natural and legal persons whose rights and freedoms are adversely affected by the use of AI systems. Without prejudice to those remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation should be entitled to lodge a complaint to the relevant market surveillance authority or the AI Office where applicable. (84b) Affected persons should have the right to request an explanation when a decision is taken by the deployer with the output from certain high-risk systems as provided for in this Regulation as the main basis and which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety or fundamental rights. This explanation should be clear and meaningful and should provide a basis for affected persons to exercise their rights. This should not apply to the use of AI systems for which exceptions or restrictions follow from Union or national law and should apply only to the extent this right is not already provided for under Union legislation. (84c) Persons acting as 'whistle-blowers' on the breaches of this Regulation should be afforded the protection guaranteed by Union legislation on the protection of persons who report breaches of law. Therefore, Directive (EU) 2019/1937 should apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches.
(85) In order to ensu­re that the regu­la­to­ry frame­work can be adapt­ed whe­re neces­sa­ry, the power to adopt acts in accordance with Artic­le 290 TFEU should be dele­ga­ted to the Com­mis­si­on to amend the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex II, the high-risk AI systems listed in Annex III, the pro­vi­si­ons regar­ding tech­ni­cal docu­men­ta­ti­on listed in Annex IV, the con­tent of the EU decla­ra­ti­on of con­for­mi­ty in Annex V, the pro­vi­si­ons regar­ding the con­for­mi­ty assess­ment pro­ce­du­res in Annex VI and VII, the pro­vi­si­ons estab­li­shing the high-risk AI systems to which the con­for­mi­ty assess­ment pro­ce­du­re based on assess­ment of the qua­li­ty manage­ment system and assess­ment of the tech­ni­cal docu­men­ta­ti­on should app­ly, the thres­hold as well as to sup­ple­ment bench­marks and indi­ca­tors in the rules for clas­si­fi­ca­ti­on of gene­ral-pur­po­se AI models with syste­mic risk, the cri­te­ria for the desi­gna­ti­on of gene­ral-pur­po­se AI models with syste­mic risk in Annex IXc, the tech­ni­cal docu­men­ta­ti­on for pro­vi­ders of gene­ral-pur­po­se AI models in Annex VIIIb and the trans­pa­ren­cy infor­ma­ti­on for pro­vi­ders of gene­ral-pur­po­se AI models in Annex VII­Ic. It is of par­ti­cu­lar importance that the Com­mis­si­on car­ry out appro­pria­te con­sul­ta­ti­ons during its pre­pa­ra­to­ry work, inclu­ding at expert level, and that tho­se con­sul­ta­ti­ons be con­duc­ted in accordance with the prin­ci­ples laid down in the Inter­in­sti­tu­tio­nal Agree­ment of 13 April 2016 on Bet­ter Law-Makin­g1. In par­ti­cu­lar, to ensu­re equal par­ti­ci­pa­ti­on in the pre­pa­ra­ti­on of dele­ga­ted acts, the Euro­pean Par­lia­ment and the Coun­cil recei­ve all docu­ments at the same time as Mem­ber Sta­tes’ experts, and their experts syste­ma­ti­cal­ly have access to mee­tings of Com­mis­si­on expert groups deal­ing with the pre­pa­ra­ti­on of dele­ga­ted acts. (85a) Given the rapid tech­no­lo­gi­cal deve­lo­p­ments and the requi­red tech­ni­cal exper­ti­se in the effec­ti­ve appli­ca­ti­on of this Regu­la­ti­on, the Com­mis­si­on should eva­lua­te and review this Regu­la­ti­on by three years after the date of ent­ry into appli­ca­ti­on and every four years the­re­af­ter and report to the Euro­pean Par­lia­ment and the Coun­cil. In addi­ti­on, taking into account the impli­ca­ti­ons for the scope of this Regu­la­ti­on, the Com­mis­si­on should car­ry out an assess­ment of the need to amend the list in Annex III and the list of pro­hi­bi­ted prac­ti­ces once a year. Moreo­ver, by two years after ent­ry into appli­ca­ti­on and every four years the­re­af­ter, the Com­mis­si­on should eva­lua­te and report to the Euro­pean Par­lia­ment and to the Coun­cil on the need to amend the high-risk are­as in Annex III, the AI systems within the scope of the trans­pa­ren­cy obli­ga­ti­ons in Tit­le IV, the effec­ti­ve­ness of the super­vi­si­on and gover­nan­ce system and the pro­gress on the deve­lo­p­ment of stan­dar­di­sati­on deli­ver­a­bles on ener­gy effi­ci­ent deve­lo­p­ment of gene­ral-pur­po­se AI models, inclu­ding the need for fur­ther mea­su­res or actions. 
Finally, within two years after the entry into application and every three years thereafter, the Commission should evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Title III, Chapter 2, for systems other than high-risk AI systems and possibly other additional requirements for such AI systems. (86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council. (87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective. (87a) In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems as defined in this Regulation. By way of exception and in light of public accountability, operators of AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX and operators of high-risk AI systems that are intended to be used by public authorities should take the necessary steps to comply with the requirements of this Regulation by the end of 2030 and by four years after the entry into application, respectively. (87b) Providers of high-risk AI systems are encouraged to start to comply, on a voluntary basis, with the relevant obligations foreseen under this Regulation already during the transitional period. (88) This Regulation should apply from … [OP – please insert the date established in Art. 85]. However, taking into account the unacceptable risk associated with the use of AI in certain ways, the prohibitions should apply already from … [OP – please insert the date – 6 months after entry into force of this Regulation]. While the full effect of these prohibitions follows with the establishment of the governance and enforcement of this Regulation, anticipating the application of the prohibitions is important to take account of unacceptable risk and has effect on other procedures, such as in civil law.
Moreo­ver, the infras­truc­tu­re rela­ted to the gover­nan­ce and the con­for­mi­ty assess­ment system should be ope­ra­tio­nal befo­re [OP – plea­se insert the date estab­lished in Art. 85], the­r­e­fo­re the pro­vi­si­ons on noti­fi­ed bodies and gover­nan­ce struc­tu­re should app­ly from … [OP – plea­se insert the date – twel­ve months fol­lo­wing the ent­ry into force of this Regu­la­ti­on]. Given the rapid pace of tech­no­lo­gi­cal advance­ments and adop­ti­on of gene­ral-pur­po­se AI models, obli­ga­ti­ons for pro­vi­ders of gene­ral pur­po­se AI models should app­ly within 12 months from the date of ent­ry into force. Codes of Prac­ti­ce should be rea­dy at the latest 3 months befo­re the ent­ry into appli­ca­ti­on of the rele­vant pro­vi­si­ons, to enable pro­vi­ders to demon­stra­te com­pli­ance in time. The AI Office should ensu­re that clas­si­fi­ca­ti­on rules and pro­ce­du­res are up to date in light of tech­no­lo­gi­cal deve­lo­p­ments. In addi­ti­on, Mem­ber Sta­tes should lay down and noti­fy to the Com­mis­si­on the rules on pen­al­ties, inclu­ding admi­ni­stra­ti­ve fines, and ensu­re that they are pro­per­ly and effec­tively imple­men­ted by the date of appli­ca­ti­on of this Regu­la­ti­on. The­r­e­fo­re, the pro­vi­si­ons on pen­al­ties should app­ly from [OP – plea­se insert the date – twel­ve months fol­lo­wing the ent­ry into force of this Regu­la­ti­on]. (89) The Euro­pean Data Pro­tec­tion Super­vi­sor and the Euro­pean Data Pro­tec­tion Board were con­sul­ted in accordance with Artic­le 42(2) of Regu­la­ti­on (EU) 2018/1725 and deli­ver­ed an opi­ni­on on 18 June 2021. 
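The recitals above stagger the application of the Regulation as offsets from its entry into force: the prohibitions after six months; the provisions on notified bodies and governance, the obligations for providers of general purpose AI models and the penalty provisions after twelve months; and codes of practice ready at the latest three months before the general-purpose provisions apply. The short date-arithmetic sketch below only makes those offsets concrete; the entry-into-force date and the add_months helper are illustrative assumptions, since the text itself leaves the actual dates to be inserted by the OP.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to 28 to stay valid)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Purely illustrative entry-into-force date; the Regulation leaves the real dates open.
entry_into_force = date(2024, 8, 1)

milestones = {
    "prohibitions (Title II) apply": add_months(entry_into_force, 6),
    "codes of practice ready (at the latest)": add_months(entry_into_force, 12 - 3),
    "notified bodies and governance provisions apply": add_months(entry_into_force, 12),
    "obligations for providers of general purpose AI models apply": add_months(entry_into_force, 12),
    "penalty provisions apply": add_months(entry_into_force, 12),
}

for label, when in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{when.isoformat()}  {label}")
```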

TITLE I GENERAL PROVISIONS

Artic­le 1 – Sub­ject matter

1. The purpose of this Regulation is to improve the functioning of the internal market and to promote the uptake of human centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection, against harmful effects of artificial intelligence systems in the Union, and to support innovation. 2. This Regulation lays down: (a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems ('AI systems') in the Union; (b) prohibitions of certain artificial intelligence practices; (c) specific requirements for high-risk AI systems and obligations for operators of such systems; (d) harmonised transparency rules for certain AI systems; (da) harmonised rules for the placing on the market of general-purpose AI models; (e) rules on market monitoring, market surveillance governance and enforcement; (ea) measures to support innovation, with a particular focus on SMEs, including start-ups.

Artic­le 2 – Scope

1. This Regulation applies to: (a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or who are located within the Union; (c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union; (ca) importers and distributors of AI systems; (cb) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; (cc) authorised representatives of providers, which are not established in the Union; (cd) affected persons that are located in the Union. 2. For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2) related to products covered by Union harmonisation legislation listed in Annex II, section B, only Article 84 of this Regulation shall apply. Article 53 shall apply only insofar as the requirements for high-risk AI systems under this Regulation have been integrated under that Union harmonisation legislation. 3. This Regulation shall not apply to areas outside the scope of EU law and in any event shall not affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States to carry out the tasks in relation to those competences. This Regulation shall not apply to AI systems if and insofar as placed on the market, put into service, or used with or without modification of such systems exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities. This Regulation shall not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities. 4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, under the condition that this third country or international organisations provide adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. 5. This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of the European Parliament and of the Council [as to be replaced by the corresponding provisions of the Digital Services Act]. 5a. 
This Regu­la­ti­on shall not app­ly to AI systems and models, inclu­ding their out­put, spe­ci­fi­cal­ly deve­lo­ped and put into ser­vice for the sole pur­po­se of sci­en­ti­fic rese­arch and deve­lo­p­ment. 5a. Uni­on law on the pro­tec­tion of per­so­nal data, pri­va­cy and the con­fi­den­tia­li­ty of com­mu­ni­ca­ti­ons applies to per­so­nal data pro­ce­s­sed in con­nec­tion with the rights and obli­ga­ti­ons laid down in this Regu­la­ti­on. This Regu­la­ti­on shall not affect Regu­la­ti­ons (EU) 2016/679 and (EU) 2018/1725 and Direc­ti­ves 2002/58/EC and (EU) 2016/680, wit­hout pre­ju­di­ce to arran­ge­ments pro­vi­ded for in Artic­le 10(5) and Artic­le 54 of this Regu­la­ti­on. 5b. This Regu­la­ti­on shall not app­ly to any rese­arch, test­ing and deve­lo­p­ment acti­vi­ty regar­ding AI systems or models pri­or to being pla­ced on the mar­ket or put into ser­vice; tho­se acti­vi­ties shall be con­duc­ted respec­ting appli­ca­ble Uni­on law. The test­ing in real world con­di­ti­ons shall not be cover­ed by this exemp­ti­on. 5b. This Regu­la­ti­on is wit­hout pre­ju­di­ce to the rules laid down by other Uni­on legal acts rela­ted to con­su­mer pro­tec­tion and pro­duct safe­ty. 5c. This Regu­la­ti­on shall not app­ly to obli­ga­ti­ons of deployers who are natu­ral per­sons using AI systems in the cour­se of a purely per­so­nal non-pro­fes­sio­nal acti­vi­ty. 5e. This Regu­la­ti­on shall not pre­clude Mem­ber Sta­tes or the Uni­on from main­tai­ning or intro­du­cing laws, regu­la­ti­ons or admi­ni­stra­ti­ve pro­vi­si­ons which are more favoura­ble to workers in terms of pro­tec­ting their rights in respect of the use of AI systems by employers, or to encou­ra­ge or allow the appli­ca­ti­on of coll­ec­ti­ve agree­ments which are more favoura­ble to workers. 5g. The obli­ga­ti­ons laid down in this Regu­la­ti­on shall not app­ly to AI systems released under free and open source licen­ces unless they are pla­ced on the mar­ket or put into ser­vice as high-risk AI systems or an AI system that falls under Tit­le II and IV. 
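Taken together, Article 2 amounts to a two-step test: does an actor fall within one of the listed roles with the required Union nexus, and does one of the exclusions (national security or military purposes, pre-market research and development, purely personal non-professional use, certain free and open source releases) take it back out of scope? The following minimal sketch mirrors only that structure; it is deliberately simplified (it ignores Annex II products, paragraph 4 and several nuances), and the Operator type and its field names are assumptions of the sketch, not terms of the Regulation.

```python
from dataclasses import dataclass

# Illustrative, non-authoritative reading of Article 2.

@dataclass
class Operator:
    role: str                   # e.g. "provider", "deployer", "importer", "distributor"
    in_union: bool              # established or located in the Union
    output_used_in_union: bool  # relevant for third-country providers and deployers
    sole_purpose: str           # e.g. "general", "military", "scientific_research", "personal"
    open_source: bool = False
    high_risk_or_title_ii_iv: bool = False

def in_scope(op: Operator) -> bool:
    # Exclusions (paragraphs 3, 5a, 5b, 5c and 5g), simplified.
    if op.sole_purpose in {"military", "defence", "national_security"}:
        return False
    if op.sole_purpose == "scientific_research":
        return False
    if op.role == "deployer" and op.sole_purpose == "personal":
        return False
    if op.open_source and not op.high_risk_or_title_ii_iv:
        return False
    # Personal and territorial scope (paragraph 1), simplified.
    if op.role in {"importer", "distributor", "product_manufacturer", "authorised_representative"}:
        return True
    if op.role in {"provider", "deployer"}:
        return op.in_union or op.output_used_in_union
    return False

# Example: a third-country provider whose system's output is used in the Union.
print(in_scope(Operator("provider", in_union=False, output_used_in_union=True, sole_purpose="general")))  # True
```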

Artic­le 3 – Definitions

For the purpose of this Regulation, the following definitions apply: (1) 'AI system' is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; (1a) 'risk' means the combination of the probability of an occurrence of harm and the severity of that harm; (2) 'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge; (4) 'deployer' means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; (5) 'authorised representative' means any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation; (6) 'importer' means any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union; (7) 'distributor' means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market; (8) 'operator' means the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor; (9) 'placing on the market' means the first making available of an AI system or a general purpose AI model on the Union market; (10) 'making available on the market' means any supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; (11) 'putting into service' means the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose; (12) 'intended purpose' means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation; (13) 'reasonably foreseeable misuse' means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems; (14) 'safety component of a product or system' means a component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety 
of per­sons or pro­per­ty; (15) ‘ins­truc­tions for use’ means the infor­ma­ti­on pro­vi­ded by the pro­vi­der to inform the user of in par­ti­cu­lar an AI system’s inten­ded pur­po­se and pro­per use; (16) ‘recall of an AI system’ means any mea­su­re aimed at achie­ving the return to the pro­vi­der or taking it out of ser­vice or dis­ab­ling the use of an AI system made available to deployers; (17) ‘with­dra­wal of an AI system’ means any mea­su­re aimed at pre­ven­ting an AI system in the sup­p­ly chain being made available on the mar­ket; (18) ‘per­for­mance of an AI system’ means the abili­ty of an AI system to achie­ve its inten­ded pur­po­se; (19) ‘noti­fy­ing aut­ho­ri­ty’ means the natio­nal aut­ho­ri­ty respon­si­ble for set­ting up and car­ry­ing out the neces­sa­ry pro­ce­du­res for the assess­ment, desi­gna­ti­on and noti­fi­ca­ti­on of con­for­mi­ty assess­ment bodies and for their moni­to­ring; (20) ‘con­for­mi­ty assess­ment’ means the pro­cess of demon­st­ra­ting whe­ther the requi­re­ments set out in Tit­le III, Chap­ter 2 of this Regu­la­ti­on rela­ting to a high-risk AI system have been ful­fil­led; (21) ‘con­for­mi­ty assess­ment body’ means a body that per­forms third-par­ty con­for­mi­ty assess­ment acti­vi­ties, inclu­ding test­ing, cer­ti­fi­ca­ti­on and inspec­tion; (22) ‘noti­fi­ed body’ means a con­for­mi­ty assess­ment body noti­fi­ed in accordance with this Regu­la­ti­on and other rele­vant Uni­on har­mo­ni­sa­ti­on legis­la­ti­on; (23) ‘sub­stan­ti­al modi­fi­ca­ti­on’ means a chan­ge to the AI system after its pla­cing on the mar­ket or put­ting into ser­vice which is not fore­seen or plan­ned in the initi­al con­for­mi­ty assess­ment by the pro­vi­der and as a result of which the com­pli­ance of the AI system with the requi­re­ments set out in Tit­le III, Chap­ter 2 of this Regu­la­ti­on is affec­ted or results in a modi­fi­ca­ti­on to the inten­ded pur­po­se for which the AI system has been asses­sed; (24) ‘CE mar­king of con­for­mi­ty’ (CE mar­king) means a mar­king by which a pro­vi­der indi­ca­tes that an AI system is in con­for­mi­ty with the requi­re­ments set out in Tit­le III, Chap­ter 2 of this Regu­la­ti­on and other appli­ca­ble Uni­on legis­la­ti­on har­mo­ni­s­ing the con­di­ti­ons for the mar­ke­ting of pro­ducts (‘Uni­on har­mo­ni­sa­ti­on legis­la­ti­on’) pro­vi­ding for its affixing; (25) ‘post-mar­ket moni­to­ring system’ means all acti­vi­ties car­ri­ed out by pro­vi­ders of AI systems to coll­ect and review expe­ri­ence gai­ned from the use of AI systems they place on the mar­ket or put into ser­vice for the pur­po­se of iden­ti­fy­ing any need to imme­dia­te­ly app­ly any neces­sa­ry cor­rec­ti­ve or pre­ven­ti­ve actions; (26) ‘mar­ket sur­veil­lan­ce aut­ho­ri­ty’ means the natio­nal aut­ho­ri­ty car­ry­ing out the acti­vi­ties and taking the mea­su­res pur­su­ant to Regu­la­ti­on (EU) 2019/1020; (27) ‘har­mo­ni­s­ed stan­dard’ means a Euro­pean stan­dard as defi­ned in Artic­le 2(1)(c) of Regu­la­ti­on (EU) No 1025/2012; (28) ‘com­mon spe­ci­fi­ca­ti­on’ means a set of tech­ni­cal spe­ci­fi­ca­ti­ons, as defi­ned in point 4 of Artic­le 2 of Regu­la­ti­on (EU) No 1025/2012 pro­vi­ding means to com­ply with cer­tain requi­re­ments estab­lished under this Regu­la­ti­on; (29) ‘trai­ning data’ means data used for trai­ning an AI system through fit­ting its lear­nable para­me­ters; (30) ‘vali­da­ti­on data’ means data used for pro­vi­ding an eva­lua­ti­on of the trai­ned AI system and for tuning its non-lear­nable para­me­ters and its 
lear­ning pro­cess, among other things, in order to pre­vent under­fit­ting or over­fit­ting; whe­re­as the vali­da­ti­on data­set is a sepa­ra­te data­set or part of the trai­ning data­set, eit­her as a fixed or varia­ble split; (31) ‘test­ing data’ means data used for pro­vi­ding an inde­pen­dent eva­lua­ti­on of the AI system in order to con­firm the expec­ted per­for­mance of that system befo­re its pla­cing on the mar­ket or put­ting into ser­vice; (32) ‘input data’ means data pro­vi­ded to or direct­ly acqui­red by an AI system on the basis of which the system pro­du­ces an out­put; (33) ‘bio­me­tric data’ means per­so­nal data resul­ting from spe­ci­fic tech­ni­cal pro­ce­s­sing rela­ting to the phy­si­cal, phy­sio­lo­gi­cal or beha­viou­ral cha­rac­te­ri­stics of a natu­ral per­son, such as facial images or dac­ty­lo­s­co­pic data; (33a)‘bio­me­tric iden­ti­fi­ca­ti­on’ means the auto­ma­ted reco­gni­ti­on of phy­si­cal, phy­sio­lo­gi­cal, beha­viou­ral, and psy­cho­lo­gi­cal human fea­tures for the pur­po­se of estab­li­shing an individual’s iden­ti­ty by com­pa­ring bio­me­tric data of that indi­vi­du­al to stored bio­me­tric data of indi­vi­du­als in a data­ba­se; (33c)‘bio­me­tric veri­fi­ca­ti­on’ means the auto­ma­ted veri­fi­ca­ti­on of the iden­ti­ty of natu­ral per­sons by com­pa­ring bio­me­tric data of an indi­vi­du­al to pre­vious­ly pro­vi­ded bio­me­tric data (one-to-one veri­fi­ca­ti­on, inclu­ding authen­ti­ca­ti­on); (33d)‘spe­cial cate­go­ries of per­so­nal data’ means the cate­go­ries of per­so­nal data refer­red to in Artic­le 9(1) of Regu­la­ti­on (EU) 2016/679, Artic­le 10 of Direc­ti­ve (EU) 2016/680 and Artic­le 10(1) of Regu­la­ti­on (EU) 2018/1725; (33e)‘sen­si­ti­ve ope­ra­tio­nal data’ means ope­ra­tio­nal data rela­ted to acti­vi­ties of pre­ven­ti­on, detec­tion, inve­sti­ga­ti­on and pro­se­cu­ti­on of cri­mi­nal offen­ces, the dis­clo­sure of which can jeo­par­di­se the inte­gri­ty of cri­mi­nal pro­ce­e­dings; (34)‘emo­ti­on reco­gni­ti­on system’ means an AI system for the pur­po­se of iden­ti­fy­ing or infer­ring emo­ti­ons or inten­ti­ons of natu­ral per­sons on the basis of their bio­me­tric data; (35)‘bio­me­tric cate­go­ri­sa­ti­on system’ means an AI system for the pur­po­se of assig­ning natu­ral per­sons to spe­ci­fic cate­go­ries on the basis of their bio­me­tric data unless ancil­la­ry to ano­ther com­mer­cial ser­vice and strict­ly neces­sa­ry for objec­ti­ve tech­ni­cal rea­sons; (36)‘remo­te bio­me­tric iden­ti­fi­ca­ti­on system’ means an AI system for the pur­po­se of iden­ti­fy­ing natu­ral per­sons, wit­hout their acti­ve invol­vement, typi­cal­ly at a distance through the com­pa­ri­son of a person’s bio­me­tric data with the bio­me­tric data con­tai­ned in a refe­rence data­ba­se; (37)‘‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system’ means a remo­te bio­me­tric iden­ti­fi­ca­ti­on system wher­eby the cap­tu­ring of bio­me­tric data, the com­pa­ri­son and the iden­ti­fi­ca­ti­on all occur wit­hout a signi­fi­cant delay. 
This com­pri­ses not only instant iden­ti­fi­ca­ti­on, but also limi­t­ed short delays in order to avo­id cir­cum­ven­ti­on; (38)‘‘post’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system’ means a remo­te bio­me­tric iden­ti­fi­ca­ti­on system other than a ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system; (39)‘publicly acce­s­si­ble space’ means any publicly or pri­va­te­ly owned phy­si­cal place acce­s­si­ble to an unde­ter­mi­ned num­ber of natu­ral per­sons, regard­less of whe­ther cer­tain con­di­ti­ons for access may app­ly, and regard­less of the poten­ti­al capa­ci­ty rest­ric­tions; (40)‘law enforce­ment aut­ho­ri­ty’ means: (a) any public aut­ho­ri­ty com­pe­tent for the pre­ven­ti­on, inve­sti­ga­ti­on, detec­tion or pro­se­cu­ti­on of cri­mi­nal offen­ces or the exe­cu­ti­on of cri­mi­nal pen­al­ties, inclu­ding the safe­guar­ding against and the pre­ven­ti­on of thre­ats to public secu­ri­ty; or (b) any other body or enti­ty ent­ru­sted by Mem­ber Sta­te law to exer­cise public aut­ho­ri­ty and public powers for the pur­po­ses of the pre­ven­ti­on, inve­sti­ga­ti­on, detec­tion or pro­se­cu­ti­on of cri­mi­nal offen­ces or the exe­cu­ti­on of cri­mi­nal pen­al­ties, inclu­ding the safe­guar­ding against and the pre­ven­ti­on of thre­ats to public secu­ri­ty; (41)‘law enforce­ment’ means acti­vi­ties car­ri­ed out by law enforce­ment aut­ho­ri­ties or on their behalf for the pre­ven­ti­on, inve­sti­ga­ti­on, detec­tion or pro­se­cu­ti­on of cri­mi­nal offen­ces or the exe­cu­ti­on of cri­mi­nal pen­al­ties, inclu­ding the safe­guar­ding against and the pre­ven­ti­on of thre­ats to public secu­ri­ty; (42)‘Arti­fi­ci­al Intel­li­gence Office’ means the Commission’s func­tion of con­tri­bu­ting to the imple­men­ta­ti­on, moni­to­ring and super­vi­si­on of AI systems, gene­ral pur­po­se AI models and AI gover­nan­ce. Refe­ren­ces in this Regu­la­ti­on to the Arti­fi­ci­al Intel­li­gence office shall be under­s­tood as refe­ren­ces to the Com­mis­si­on; (43)‘natio­nal com­pe­tent aut­ho­ri­ty’ means any of the fol­lo­wing: the noti­fy­ing aut­ho­ri­ty and the mar­ket sur­veil­lan­ce aut­ho­ri­ty. As regards AI systems put into ser­vice or used by EU insti­tu­ti­ons, agen­ci­es, offices and bodies, any refe­rence to natio­nal com­pe­tent aut­ho­ri­ties or mar­ket sur­veil­lan­ce aut­ho­ri­ties in this Regu­la­ti­on shall be under­s­tood as refer­ring to the Euro­pean Data Pro­tec­tion Super­vi­sor; (44)‘serious inci­dent’ means any inci­dent or mal­func­tio­ning of an AI system that direct­ly or indi­rect­ly leads to any of the fol­lo­wing: (a) the death of a per­son or serious dama­ge to a person’s health; (b) a serious and irrever­si­ble dis­rup­ti­on of the manage­ment and ope­ra­ti­on of cri­ti­cal infras­truc­tu­re; (ba) breach of obli­ga­ti­ons under Uni­on law inten­ded to pro­tect fun­da­men­tal rights; (bb) serious dama­ge to pro­per­ty or the envi­ron­ment. 
(44a) 'personal data' means personal data as defined in Article 4, point (1) of Regulation (EU) 2016/679; (44c) 'non-personal data' means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679; (be) 'profiling' means any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679; or in the case of law enforcement authorities – in point 4 of Article 3 of Directive (EU) 2016/680 or, in the case of Union institutions, bodies, offices or agencies, in point 5 of Article 3 of Regulation (EU) 2018/1725; (bf) 'real world testing plan' means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions; (44eb) 'sandbox plan' means a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox; (bg) 'AI regulatory sandbox' means a concrete and controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision; (bh) 'AI literacy' refers to skills, knowledge and understanding that allow providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause; (bi) 'testing in real world conditions' means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled; (bj) 'subject' for the purpose of real world testing means a natural person who participates in testing in real world conditions; (bk) 'informed consent' means a subject's freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate; (bl) 'deep fake' means AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful; (44e) 'widespread infringement' means any act or omission contrary to Union law that protects the interest of individuals: (a) which has harmed or is likely to harm the collective interests of individuals residing 
in at least two Member States other than the Member State, in which: (i) the act or omission originated or took place; (ii) the provider concerned, or, where applicable, its authorised representative is established; or (iii) the deployer is established, when the infringement is committed by the deployer; (b) which protects the interests of individuals, that have caused, cause or are likely to cause harm to the collective interests of individuals and that have common features, including the same unlawful practice, the same interest being infringed and that are occurring concurrently, committed by the same operator, in at least three Member States; (44h) 'critical infrastructure' means an asset, a facility, equipment, a network or a system, or a part thereof, which is necessary for the provision of an essential service within the meaning of Article 2(4) of Directive (EU) 2022/2557; (44b) 'general purpose AI model' means an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities; (44c) 'high-impact capabilities' in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models; (44d) 'systemic risk at Union level' means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain; (44e) 'general purpose AI system' means an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems; (44f) 'floating-point operation' means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base; (44g) 'downstream provider' means a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations. 
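The definitions in points (44b), (44e) and (44g) chain together: a general purpose AI model is integrated into AI systems, a system based on such a model is a general purpose AI system, and whoever integrates a model into a system, whether their own model or a third party's, acts as a downstream provider. The minimal data-model sketch below only illustrates that relationship, together with the supervision rule described in recital (80y); the class and field names are inventions of the sketch, not defined terms.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of how the Article 3 definitions relate; not normative.

@dataclass
class GeneralPurposeAIModel:          # Art. 3(44b)
    provider: str

@dataclass
class AISystem:                       # Art. 3(1); point (44e) when based on a GP model
    provider: str
    based_on: Optional[GeneralPurposeAIModel] = None

def is_general_purpose_ai_system(system: AISystem) -> bool:
    # Point (44e): an AI system based on a general purpose AI model.
    return system.based_on is not None

def is_downstream_provider(system: AISystem) -> bool:
    # Point (44g): integrating a model makes the system provider a downstream
    # provider, whether the model is its own or supplied by another entity.
    return system.based_on is not None

def supervised_at_union_level(system: AISystem) -> bool:
    # Recital (80y): where model and system come from the same provider,
    # supervision of the system takes place at Union level through the AI Office.
    return system.based_on is not None and system.based_on.provider == system.provider

model = GeneralPurposeAIModel(provider="Model Co")
system = AISystem(provider="App Co", based_on=model)
print(is_general_purpose_ai_system(system), is_downstream_provider(system), supervised_at_union_level(system))
```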

Artic­le 4b – AI literacy

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used. 

TITLE II PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES

Artic­le 5 – Pro­hi­bi­ted Arti­fi­ci­al Intel­li­gence Practices

1. The fol­lo­wing arti­fi­ci­al intel­li­gence prac­ti­ces shall be pro­hi­bi­ted: (a) the pla­cing on the mar­ket, put­ting into ser­vice or use of an AI system that deploys sub­li­mi­nal tech­ni­ques bey­ond a person’s con­scious­ness or pur­po­seful­ly mani­pu­la­ti­ve or decep­ti­ve tech­ni­ques, with the objec­ti­ve to or the effect of mate­ri­al­ly dis­tort­ing a person’s or a group of per­sons’ beha­viour by app­re­cia­bly impai­ring the person’s abili­ty to make an infor­med decis­i­on, ther­eby caus­ing the per­son to take a decis­i­on that that per­son would not have other­wi­se taken in a man­ner that cau­ses or is likely to cau­se that per­son, ano­ther per­son or group of per­sons signi­fi­cant harm; (b) the pla­cing on the mar­ket, put­ting into ser­vice or use of an AI system that exploits any of the vul­nerabi­li­ties of a per­son or a spe­ci­fic group of per­sons due to their age, disa­bi­li­ty or a spe­ci­fic social or eco­no­mic situa­ti­on, with the objec­ti­ve to or the effect of mate­ri­al­ly dis­tort­ing the beha­viour of that per­son or a per­son per­tai­ning to that group in a man­ner that cau­ses or is rea­son­ab­ly likely to cau­se that per­son or ano­ther per­son signi­fi­cant harm; (ba) the pla­cing on the mar­ket or put­ting into ser­vice for this spe­ci­fic pur­po­se, or use of bio­me­tric cate­go­ri­sa­ti­on systems that cate­go­ri­se indi­vi­du­al­ly natu­ral per­sons based on their bio­me­tric data to dedu­ce or infer their race, poli­ti­cal opi­ni­ons, trade uni­on mem­ber­ship, reli­gious or phi­lo­so­phi­cal beliefs, sex life or sexu­al ori­en­ta­ti­on. This pro­hi­bi­ti­on does not cover any label­ling or fil­te­ring of lawful­ly acqui­red bio­me­tric data­sets, such as images, based on bio­me­tric data or cate­go­ri­zing of bio­me­tric data in the area of law enforce­ment; (c) the pla­cing on the mar­ket, put­ting into ser­vice or use of AI systems for the eva­lua­ti­on or clas­si­fi­ca­ti­on of natu­ral per­sons or groups the­reof over a cer­tain peri­od of time based on their social beha­viour or known, infer­red or pre­dic­ted per­so­nal or per­so­na­li­ty cha­rac­te­ri­stics, with the social score lea­ding to eit­her or both of the fol­lo­wing: (i) detri­men­tal or unfa­voura­ble tre­at­ment of cer­tain natu­ral per­sons or who­le groups the­reof in social con­texts that are unre­la­ted to the con­texts in which the data was ori­gi­nal­ly gene­ra­ted or coll­ec­ted; (ii) detri­men­tal or unfa­voura­ble tre­at­ment of cer­tain natu­ral per­sons or groups the­reof that is unju­sti­fi­ed or dis­pro­por­tio­na­te to their social beha­viour or its gra­vi­ty; (d) the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for the pur­po­se of law enforce­ment unless and in as far as such use is strict­ly neces­sa­ry for one of the fol­lo­wing objec­ti­ves: (i) the tar­ge­ted search for spe­ci­fic vic­tims of abduc­tion, traf­ficking in human beings and sexu­al explo­ita­ti­on of human beings as well as search for miss­ing per­sons; (ii) the pre­ven­ti­on of a spe­ci­fic, sub­stan­ti­al and immi­nent thre­at to the life or phy­si­cal safe­ty of natu­ral per­sons or a genui­ne and pre­sent or genui­ne and fore­seeable thre­at of a ter­ro­rist attack; (iii) the loca­li­sa­ti­on or iden­ti­fi­ca­ti­on of a per­son suspec­ted of having com­mit­ted a cri­mi­nal offence, for the pur­po­ses of con­duc­ting a cri­mi­nal inve­sti­ga­ti­on, pro­se­cu­ti­on or exe­cu­ting a cri­mi­nal penal­ty for offen­ces, refer­red to 
in Annex IIa and punis­ha­ble in the Mem­ber Sta­te con­cer­ned by a cus­to­di­al sen­tence or a detenti­on order for a maxi­mum peri­od of at least four years. This para­graph is wit­hout pre­ju­di­ce to the pro­vi­si­ons in Artic­le 9 of the GDPR for the pro­ce­s­sing of bio­me­tric data for pur­po­ses other than law enforce­ment. (da) the pla­cing on the mar­ket, put­ting into ser­vice for this spe­ci­fic pur­po­se, or use of an AI system for making risk assess­ments of natu­ral per­sons in order to assess or pre­dict the risk of a natu­ral per­son to com­mit a cri­mi­nal offence, based sole­ly on the pro­fil­ing of a natu­ral per­son or on asses­sing their per­so­na­li­ty traits and cha­rac­te­ri­stics. This pro­hi­bi­ti­on shall not app­ly to AI systems used to sup­port the human assess­ment of the invol­vement of a per­son in a cri­mi­nal acti­vi­ty, which is alre­a­dy based on objec­ti­ve and veri­fia­ble facts direct­ly lin­ked to a cri­mi­nal acti­vi­ty; (db) the pla­cing on the mar­ket, put­ting into ser­vice for this spe­ci­fic pur­po­se, or use of AI systems that crea­te or expand facial reco­gni­ti­on data­ba­ses through the unt­ar­ge­ted scra­ping of facial images from the inter­net or CCTV foota­ge; (dc) the pla­cing on the mar­ket, put­ting into ser­vice for this spe­ci­fic pur­po­se, or use of AI systems to infer emo­ti­ons of a natu­ral per­son in the are­as of work­place and edu­ca­ti­on insti­tu­ti­ons except in cases whe­re the use of the AI system is inten­ded to be put in place or into the mar­ket for medi­cal or safe­ty rea­sons. 1a. This Artic­le shall not affect the pro­hi­bi­ti­ons that app­ly whe­re an arti­fi­ci­al intel­li­gence prac­ti­ce inf­rin­ges other Uni­on law. 2. The use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for the pur­po­se of law enforce­ment for any of the objec­ti­ves refer­red to in para­graph 1 point (d) shall only be deployed for the pur­po­ses under para­graph 1, point (d) to con­firm the spe­ci­fi­cal­ly tar­ge­ted individual’s iden­ti­ty and it shall take into account the fol­lo­wing ele­ments: (a) the natu­re of the situa­ti­on giving rise to the pos­si­ble use, in par­ti­cu­lar the serious­ness, pro­ba­bi­li­ty and sca­le of the harm cau­sed in the absence of the use of the system; (b) the con­se­quen­ces of the use of the system for the rights and free­doms of all per­sons con­cer­ned, in par­ti­cu­lar the serious­ness, pro­ba­bi­li­ty and sca­le of tho­se con­se­quen­ces. In addi­ti­on, the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for the pur­po­se of law enforce­ment for any of the objec­ti­ves refer­red to in para­graph 1 point (d) shall com­ply with neces­sa­ry and pro­por­tio­na­te safe­guards and con­di­ti­ons in rela­ti­on to the use in accordance with natio­nal legis­la­ti­ons aut­ho­ri­zing the use the­reof, in par­ti­cu­lar as regards the tem­po­ral, geo­gra­phic and per­so­nal limi­ta­ti­ons. The use of the ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system in publicly acce­s­si­ble spaces shall only be aut­ho­ri­sed if the law enforce­ment aut­ho­ri­ty has com­ple­ted a fun­da­men­tal rights impact assess­ment as pro­vi­ded for in Artic­le 29a and has regi­stered the system in the data­ba­se accor­ding to Artic­le 51. 
Howe­ver, in duly justi­fi­ed cases of urgen­cy, the use of the system may be com­men­ced wit­hout the regi­stra­ti­on, pro­vi­ded that the regi­stra­ti­on is com­ple­ted wit­hout undue delay. 3. As regards para­graphs 1, point (d) and 2, each use for the pur­po­se of law enforce­ment of a ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system in publicly acce­s­si­ble spaces shall be sub­ject to a pri­or aut­ho­ri­sa­ti­on gran­ted by a judi­cial aut­ho­ri­ty or an inde­pen­dent admi­ni­stra­ti­ve aut­ho­ri­ty who­se decis­i­on is bin­ding of the Mem­ber Sta­te in which the use is to take place, issued upon a rea­so­ned request and in accordance with the detail­ed rules of natio­nal law refer­red to in para­graph 4. Howe­ver, in a duly justi­fi­ed situa­ti­on of urgen­cy, the use of the system may be com­men­ced wit­hout an aut­ho­ri­sa­ti­on pro­vi­ded that such aut­ho­ri­sa­ti­on shall be reque­sted wit­hout undue delay, at the latest within 24 hours. If such aut­ho­ri­sa­ti­on is rejec­ted, its use shall be stop­ped with imme­dia­te effect and all the data, as well as the results and out­puts of this use shall be imme­dia­te­ly dis­card­ed and dele­ted. The com­pe­tent judi­cial aut­ho­ri­ty or an inde­pen­dent admi­ni­stra­ti­ve aut­ho­ri­ty who­se decis­i­on is bin­ding shall only grant the aut­ho­ri­sa­ti­on whe­re it is satis­fied, based on objec­ti­ve evi­dence or clear indi­ca­ti­ons pre­sen­ted to it, that the use of the ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system at issue is neces­sa­ry for and pro­por­tio­na­te to achie­ving one of the objec­ti­ves spe­ci­fi­ed in para­graph 1, point (d), as iden­ti­fi­ed in the request and, in par­ti­cu­lar, remains limi­t­ed to what is strict­ly neces­sa­ry con­cer­ning the peri­od of time as well as geo­gra­phic and per­so­nal scope. In deci­ding on the request, the com­pe­tent judi­cial aut­ho­ri­ty or an inde­pen­dent admi­ni­stra­ti­ve aut­ho­ri­ty who­se decis­i­on is bin­ding shall take into account the ele­ments refer­red to in para­graph 2. It shall be ensu­red that no decis­i­on that pro­du­ces an adver­se legal effect on a per­son may be taken by the judi­cial aut­ho­ri­ty or an inde­pen­dent admi­ni­stra­ti­ve aut­ho­ri­ty who­se decis­i­on is bin­ding sole­ly based on the out­put of the remo­te bio­me­tric iden­ti­fi­ca­ti­on system. 3a. Wit­hout pre­ju­di­ce to para­graph 3, each use of a ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on system in publicly acce­s­si­ble spaces for law enforce­ment pur­po­ses shall be noti­fi­ed to the rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ty and the natio­nal data pro­tec­tion aut­ho­ri­ty in accordance with the natio­nal rules refer­red to in para­graph 4. The noti­fi­ca­ti­on shall as a mini­mum con­tain the infor­ma­ti­on spe­ci­fi­ed under para­graph 5 and shall not include sen­si­ti­ve ope­ra­tio­nal data. 4. A Mem­ber Sta­te may deci­de to pro­vi­de for the pos­si­bi­li­ty to ful­ly or par­ti­al­ly aut­ho­ri­se the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for the pur­po­se of law enforce­ment within the limits and under the con­di­ti­ons listed in para­graphs 1, point (d), 2 and 3. Mem­ber Sta­tes con­cer­ned shall lay down in their natio­nal law the neces­sa­ry detail­ed rules for the request, issu­an­ce and exer­cise of, as well as super­vi­si­on and report­ing rela­ting to, the aut­ho­ri­sa­ti­ons refer­red to in para­graph 3. 
Tho­se rules shall also spe­ci­fy in respect of which of the objec­ti­ves listed in para­graph 1, point (d), inclu­ding which of the cri­mi­nal offen­ces refer­red to in point (iii) the­reof, the com­pe­tent aut­ho­ri­ties may be aut­ho­ri­sed to use tho­se systems for the pur­po­se of law enforce­ment. Mem­ber Sta­tes shall noti­fy tho­se rules to the Com­mis­si­on at the latest 30 days fol­lo­wing the adop­ti­on the­reof. Mem­ber Sta­tes may intro­du­ce, in accordance with Uni­on law, more rest­ric­ti­ve laws on the use of remo­te bio­me­tric iden­ti­fi­ca­ti­on systems. 5. Natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ties and the natio­nal data pro­tec­tion aut­ho­ri­ties of Mem­ber Sta­tes that have been noti­fi­ed of the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for law enforce­ment pur­po­ses pur­su­ant to para­graph 3a shall sub­mit to the Com­mis­si­on annu­al reports on such use. For that pur­po­se, the Com­mis­si­on shall pro­vi­de Mem­ber Sta­tes and natio­nal mar­ket sur­veil­lan­ce and data pro­tec­tion aut­ho­ri­ties with a tem­p­la­te, inclu­ding infor­ma­ti­on on the num­ber of the decis­i­ons taken by com­pe­tent judi­cial aut­ho­ri­ties or an inde­pen­dent admi­ni­stra­ti­ve aut­ho­ri­ty who­se decis­i­on is bin­ding upon requests for aut­ho­ri­sa­ti­ons in accordance with para­graph 3 and their result. 6. The Com­mis­si­on shall publish annu­al reports on the use of ‘real-time’ remo­te bio­me­tric iden­ti­fi­ca­ti­on systems in publicly acce­s­si­ble spaces for law enforce­ment pur­po­ses based on aggre­ga­ted data in Mem­ber Sta­tes based on the annu­al reports refer­red to in para­graph 5, which shall not include sen­si­ti­ve ope­ra­tio­nal data of the rela­ted law enforce­ment activities. 
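Paragraphs 3 and 3a of Article 5 describe a short procedural loop for each use of a 'real-time' remote biometric identification system: prior authorisation by a judicial or binding independent administrative authority, an urgency path under which authorisation must be requested without undue delay and at the latest within 24 hours, immediate termination and deletion of data, results and outputs if the authorisation is refused, and notification of the market surveillance and data protection authorities. The sketch below encodes only that sequencing; the class, method and field names are illustrative assumptions, not terms of the Regulation.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative sketch of the sequencing in Article 5(3) and (3a); not normative.

class RealTimeRBIUse:
    def __init__(self, urgent: bool = False):
        self.urgent = urgent
        self.started_at: Optional[datetime] = None
        self.authorised: Optional[bool] = None   # None = decision pending
        self.data: list = []

    def start(self, now: datetime, prior_authorisation: bool) -> None:
        if not prior_authorisation and not self.urgent:
            raise PermissionError("Art. 5(3): prior authorisation required absent duly justified urgency")
        self.started_at = now
        self.notify_authorities()                 # Art. 5(3a): notify MSA and data protection authority

    def notify_authorities(self) -> None:
        print("notified market surveillance authority and national data protection authority")

    def request_authorisation(self, now: datetime) -> None:
        # Urgency path: the request must be made at the latest within 24 hours.
        assert self.started_at is not None
        if now - self.started_at > timedelta(hours=24):
            raise TimeoutError("Art. 5(3): authorisation must be requested within 24 hours")

    def on_decision(self, granted: bool) -> None:
        self.authorised = granted
        if not granted:
            # Use stops with immediate effect; data, results and outputs are discarded.
            self.data.clear()

use = RealTimeRBIUse(urgent=True)
use.start(datetime(2025, 1, 1, 10, 0), prior_authorisation=False)
use.request_authorisation(datetime(2025, 1, 1, 20, 0))
use.on_decision(granted=False)
print(use.data)  # []
```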

TITLE III HIGH-RISK AI SYSTEMS

Chap­ter 1 CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK

Artic­le 6 – Clas­si­fi­ca­ti­on rules for high-risk AI systems

1. Irre­spec­ti­ve of whe­ther an AI system is pla­ced on the mar­ket or put into ser­vice inde­pendent­ly from the pro­ducts refer­red to in points (a) and (b), that AI system shall be con­side­red high-risk whe­re both of the fol­lo­wing con­di­ti­ons are ful­fil­led: (a) the AI system is inten­ded to be used as a safe­ty com­po­nent of a pro­duct, or the AI system is its­elf a pro­duct, cover­ed by the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex II; (b) the pro­duct who­se safe­ty com­po­nent pur­su­ant to point (a) is the AI system, or the AI system its­elf as a pro­duct, is requi­red to under­go a third-par­ty con­for­mi­ty assess­ment, with a view to the pla­cing on the mar­ket or put­ting into ser­vice of that pro­duct pur­su­ant to the Uni­on har­mo­ni­sa­ti­on legis­la­ti­on listed in Annex II. 2. In addi­ti­on to the high-risk AI systems refer­red to in para­graph 1, AI systems refer­red to in Annex III shall also be con­side­red high-risk. 2a. By dero­ga­ti­on from para­graph 2 AI systems shall not be con­side­red as high risk if they do not pose a signi­fi­cant risk of harm, to the health, safe­ty or fun­da­men­tal rights of natu­ral per­sons, inclu­ding by not mate­ri­al­ly influen­cing the out­co­me of decis­i­on making. This shall be the case if one or more of the fol­lo­wing cri­te­ria are ful­fil­led: (a) the AI system is inten­ded to per­form a nar­row pro­ce­du­ral task; (b) the AI system is inten­ded to impro­ve the result of a pre­vious­ly com­ple­ted human acti­vi­ty; (c) the AI system is inten­ded to detect decis­i­on-making pat­terns or devia­ti­ons from pri­or decis­i­on-making pat­terns and is not meant to replace or influence the pre­vious­ly com­ple­ted human assess­ment, wit­hout pro­per human review; or (d) the AI system is inten­ded to per­form a pre­pa­ra­to­ry task to an assess­ment rele­vant for the pur­po­se of the use cases listed in Annex III. Not­wi­th­stan­ding first sub­pa­ra­graph of this para­graph, an AI system shall always be con­side­red high-risk if the AI system per­forms pro­fil­ing of natu­ral per­sons. 2b. A pro­vi­der who con­siders that an AI system refer­red to in Annex III is not high-risk shall docu­ment its assess­ment befo­re that system is pla­ced on the mar­ket or put into ser­vice. Such pro­vi­der shall be sub­ject to the regi­stra­ti­on obli­ga­ti­on set out in Artic­le 51(1a). Upon request of natio­nal com­pe­tent aut­ho­ri­ties, the pro­vi­der shall pro­vi­de the docu­men­ta­ti­on of the assess­ment. 2c. The Com­mis­si­on shall, after con­sul­ting the AI Board, and no later than 18 months after the ent­ry into force of this Regu­la­ti­on, pro­vi­de gui­de­lines spe­ci­fy­ing the prac­ti­cal imple­men­ta­ti­on of this artic­le com­ple­ted by a com­pre­hen­si­ve list of prac­ti­cal examp­les of high risk and non-high risk use cases on AI systems in accordance with the con­di­ti­ons set out in Artic­le 82a. 2d. The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 73 to amend the cri­te­ria laid down in points (a) to (d) of the first sub­pa­ra­graph of para­graph 2a. The Com­mis­si­on may adopt dele­ga­ted acts adding new cri­te­ria to tho­se laid down in points (a) to (d) of the first sub­pa­ra­graph of para­graph 2a, or modi­fy­ing them, only whe­re the­re is con­cre­te and relia­ble evi­dence of the exi­stence of AI systems that fall under the scope of Annex III but that do not pose a signi­fi­cant risk of harm to the health, safe­ty and fun­da­men­tal rights. 
The Com­mis­si­on shall adopt dele­ga­ted acts dele­ting any of the cri­te­ria laid down in the first sub­pa­ra­graph of para­graph 2a whe­re the­re is con­cre­te and relia­ble evi­dence that this is neces­sa­ry for the pur­po­se of main­tai­ning the level of pro­tec­tion of health, safe­ty and fun­da­men­tal rights in the Uni­on. Any amend­ment to the cri­te­ria laid down in points (a) to (d) set out in the first sub­pa­ra­graph of para­graph 2a shall not decrea­se the over­all level of pro­tec­tion of health, safe­ty and fun­da­men­tal rights in the Uni­on. When adop­ting the dele­ga­ted acts, the Com­mis­si­on shall ensu­re con­si­sten­cy with the dele­ga­ted acts adopted pur­su­ant to Artic­le 7(1) and shall take account of mar­ket and tech­no­lo­gi­cal developments. 
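Article 6(2a) and (2b) can be read as a simple decision rule: an Annex III system escapes the high-risk classification only if at least one of the criteria (a) to (d) applies and it does not perform profiling of natural persons, and the provider must still document its assessment. The Python sketch below renders that rule under these assumptions; the field names are illustrative, not terms defined by the Regulation.

# Illustrative reading of the Article 6(2a) derogation: an Annex III system may be
# treated as non-high-risk if at least one of criteria (a)-(d) applies, but never
# when it performs profiling of natural persons. Field names are assumptions made
# for the sketch, not terminology defined by the Regulation.
from dataclasses import dataclass

@dataclass
class AnnexIIISelfAssessment:
    narrow_procedural_task: bool                            # criterion (a)
    improves_completed_human_activity: bool                 # criterion (b)
    detects_patterns_without_replacing_human_review: bool   # criterion (c)
    preparatory_task_only: bool                             # criterion (d)
    performs_profiling_of_natural_persons: bool

def is_high_risk_under_annex_iii(a: AnnexIIISelfAssessment) -> bool:
    if a.performs_profiling_of_natural_persons:
        return True  # profiling systems are always considered high-risk
    any_derogation_criterion = (
        a.narrow_procedural_task
        or a.improves_completed_human_activity
        or a.detects_patterns_without_replacing_human_review
        or a.preparatory_task_only
    )
    return not any_derogation_criterion

# Example: a purely preparatory system that does not profile anyone.
example = AnnexIIISelfAssessment(False, False, False, True, False)
print(is_high_risk_under_annex_iii(example))  # False -> assessment must still be documented (Art. 6(2b))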

Artic­le 7 – Amend­ments to Annex III

1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying use cases of high-risk AI systems where both of the following conditions are fulfilled: (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. 2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria: (a) the intended purpose of the AI system; (b) the extent to which an AI system has been used or is likely to be used; (ba) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed; (bb) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm; (c) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated for example by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate; (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to disproportionately affect a particular group of persons; (e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome; (f) the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age; (g) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or fundamental rights shall not be considered as easily corrigible or reversible; (gb) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety; (h) the extent to which existing Union legislation provides for: (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages; (ii) effective measures to prevent or substantially minimise those risks. 2a.
The Com­mis­si­on is empowered to adopt dele­ga­ted acts in accordance with Artic­le 73 to amend the list in Annex III by remo­ving high-risk AI systems whe­re both of the fol­lo­wing con­di­ti­ons are ful­fil­led: (a) the high-risk AI system(s) con­cer­ned no lon­ger pose any signi­fi­cant risks to fun­da­men­tal rights, health or safe­ty, taking into account the cri­te­ria listed in para­graph 2; (b) the dele­ti­on does not decrea­se the over­all level of pro­tec­tion of health, safe­ty and fun­da­men­tal rights under Uni­on law. 

Chap­ter 2 REQUIREMENTS FOR HIGH-RISK AI SYSTEMS

Artic­le 8 – Com­pli­ance with the requirements

1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. 2a. Where a product contains an artificial intelligence system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Annex II, Section A apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under the Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in Chapter 2 of this Title, and in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice to integrate, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into already existing documentation and procedures required under the Union harmonisation legislation listed in Annex II, Section A.

Artic­le 9 – Risk manage­ment system

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. 2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: (a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; (b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; (c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61; (d) adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a) of this paragraph in accordance with the provisions of the following paragraphs. 2a. The risks referred to in this paragraph shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information. 3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Chapter 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. 4. The risk management measures referred to in paragraph 2, point (d) shall be such that the relevant residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. In identifying the most appropriate risk management measures, the following shall be ensured: (a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 as far as technically feasible through adequate design and development of the high-risk AI system; (b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated; (c) provision of the required information pursuant to Article 13, referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to deployers. With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training to be expected by the deployer and the presumable context in which the system is intended to be used. 5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Chapter. 6.
Test­ing pro­ce­du­res may include test­ing in real world con­di­ti­ons in accordance with Artic­le 54a. 7. The test­ing of the high-risk AI systems shall be per­for­med, as appro­pria­te, at any point in time throug­hout the deve­lo­p­ment pro­cess, and, in any event, pri­or to the pla­cing on the mar­ket or the put­ting into ser­vice. Test­ing shall be made against pri­or defi­ned metrics and pro­ba­bi­li­stic thres­holds that are appro­pria­te to the inten­ded pur­po­se of the high-risk AI system. 8. When imple­men­ting the risk manage­ment system descri­bed in para­graphs 1 to 6, pro­vi­ders shall give con­side­ra­ti­on to whe­ther in view of its inten­ded pur­po­se the high-risk AI system is likely to adver­se­ly impact per­sons under the age of 18 and, as appro­pria­te, other vul­nerable groups of peo­p­le. 9. For pro­vi­ders of high-risk AI systems that are sub­ject to requi­re­ments regar­ding inter­nal risk manage­ment pro­ce­s­ses under rele­vant sec­to­ri­al Uni­on law, the aspects descri­bed in para­graphs 1 to 8 may be part of or com­bi­ned with the risk manage­ment pro­ce­du­res estab­lished pur­su­ant to that law. 
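Article 9(2) and (4) describe an iterative loop: identify and evaluate risks, then adopt targeted measures until the relevant residual risk and the overall residual risk are judged acceptable. The following Python sketch mimics that loop with an invented scoring scale and acceptability threshold; the Regulation does not prescribe any particular metric, so every number here is an assumption made for illustration.

# Minimal sketch of the iterative risk-management steps in Article 9(2): identify,
# estimate/evaluate, review post-market data, then adopt targeted measures so that
# residual risk is judged acceptable. Thresholds and scoring are invented for the
# example; the Regulation does not prescribe any particular metric.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    description: str
    severity: int        # illustrative 1-5 scale
    likelihood: int      # illustrative 1-5 scale
    mitigations: List[str] = field(default_factory=list)

    @property
    def residual_score(self) -> int:
        # assume each mitigation reduces the combined score, floored at 1
        return max(1, self.severity * self.likelihood - 3 * len(self.mitigations))

ACCEPTABLE_RESIDUAL = 6  # assumed acceptability threshold for the sketch

def iterate_risk_management(risks: List[Risk]) -> List[Risk]:
    """One pass of step (d): add control measures until residual risk is acceptable."""
    for risk in risks:
        while risk.residual_score > ACCEPTABLE_RESIDUAL:
            risk.mitigations.append("additional design or control measure")
    return risks

register = [Risk("misidentification of persons in poor lighting", severity=4, likelihood=3)]
for risk in iterate_risk_management(register):
    print(risk.description, "-> residual score", risk.residual_score, "with", len(risk.mitigations), "measures")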

Artic­le 10 – Data and data governance

1. High-risk AI systems which make use of tech­ni­ques invol­ving the trai­ning of models with data shall be deve­lo­ped on the basis of trai­ning, vali­da­ti­on and test­ing data sets that meet the qua­li­ty cri­te­ria refer­red to in para­graphs 2 to 5 when­ever such data­sets are used. 2. Trai­ning, vali­da­ti­on and test­ing data sets shall be sub­ject to appro­pria­te data gover­nan­ce and manage­ment prac­ti­ces appro­pria­te for the inten­ded pur­po­se of the AI system. Tho­se prac­ti­ces shall con­cern in par­ti­cu­lar: (a) the rele­vant design choices; (aa) data coll­ec­tion pro­ce­s­ses and ori­gin of data, and in the case of per­so­nal data, the ori­gi­nal pur­po­se of data coll­ec­tion; (c) rele­vant data pre­pa­ra­ti­on pro­ce­s­sing ope­ra­ti­ons, such as anno­ta­ti­on, label­ling, clea­ning, updating, enrich­ment and aggre­ga­ti­on; (d) the for­mu­la­ti­on of assump­ti­ons, nota­b­ly with respect to the infor­ma­ti­on that the data are sup­po­sed to mea­su­re and repre­sent; (e) an assess­ment of the avai­la­bi­li­ty, quan­ti­ty and sui­ta­bi­li­ty of the data sets that are nee­ded; (f) exami­na­ti­on in view of pos­si­ble bia­ses that are likely to affect the health and safe­ty of per­sons, nega­tively impact fun­da­men­tal rights or lead to dis­cri­mi­na­ti­on pro­hi­bi­ted under Uni­on law, espe­ci­al­ly whe­re data out­puts influence inputs for future ope­ra­ti­ons; (fa) appro­pria­te mea­su­res to detect, pre­vent and miti­ga­te pos­si­ble bia­ses iden­ti­fi­ed accor­ding to point (f); (g) the iden­ti­fi­ca­ti­on of rele­vant data gaps or short­co­mings that pre­vent com­pli­ance with this Regu­la­ti­on, and how tho­se gaps and short­co­mings can be addres­sed. 3. Trai­ning, vali­da­ti­on and test­ing data­sets shall be rele­vant, suf­fi­ci­ent­ly repre­sen­ta­ti­ve, and to the best ext­ent pos­si­ble, free of errors and com­ple­te in view of the inten­ded pur­po­se. They shall have the appro­pria­te sta­tis­ti­cal pro­per­ties, inclu­ding, whe­re appli­ca­ble, as regards the per­sons or groups of per­sons in rela­ti­on to whom the high-risk AI system is inten­ded to be used. The­se cha­rac­te­ri­stics of the data sets may be met at the level of indi­vi­du­al data sets or a com­bi­na­ti­on the­reof. 4. Data­sets shall take into account, to the ext­ent requi­red by the inten­ded pur­po­se, the cha­rac­te­ri­stics or ele­ments that are par­ti­cu­lar to the spe­ci­fic geo­gra­phi­cal, con­tex­tu­al, beha­viou­ral or func­tion­al set­ting within which the high-risk AI system is inten­ded to be used. 5. To the ext­ent that it is strict­ly neces­sa­ry for the pur­po­ses of ensu­ring bias detec­tion and cor­rec­tion in rela­ti­on to the high-risk AI systems in accordance with the second para­graph, point f and fa, the pro­vi­ders of such systems may excep­tio­nal­ly pro­cess spe­cial cate­go­ries of per­so­nal data refer­red to in Artic­le 9(1) of Regu­la­ti­on (EU) 2016/679, Artic­le 10 of Direc­ti­ve (EU) 2016/680 and Artic­le 10(1) of Regu­la­ti­on (EU) 2018/1725, sub­ject to appro­pria­te safe­guards for the fun­da­men­tal rights and free­doms of natu­ral per­sons. 
In addi­ti­on to pro­vi­si­ons set out in the Regu­la­ti­on (EU) 2016/679, Direc­ti­ve (EU) 2016/680 and Regu­la­ti­on (EU) 2018/1725, all the fol­lo­wing con­di­ti­ons shall app­ly in order for such pro­ce­s­sing to occur: (a) the bias detec­tion and cor­rec­tion can­not be effec­tively ful­fil­led by pro­ce­s­sing other data, inclu­ding syn­the­tic or anony­mi­sed data; (b) the spe­cial cate­go­ries of per­so­nal data pro­ce­s­sed for the pur­po­se of this para­graph are sub­ject to tech­ni­cal limi­ta­ti­ons on the re-use of the per­so­nal data and sta­te of the art secu­ri­ty and pri­va­cy-pre­ser­ving mea­su­res, inclu­ding pseud­ony­mi­sa­ti­on; (c) the spe­cial cate­go­ries of per­so­nal data pro­ce­s­sed for the pur­po­se of this para­graph are sub­ject to mea­su­res to ensu­re that the per­so­nal data pro­ce­s­sed are secu­red, pro­tec­ted, sub­ject to sui­ta­ble safe­guards, inclu­ding strict con­trols and docu­men­ta­ti­on of the access, to avo­id misu­se and ensu­re only aut­ho­ri­sed per­sons have access to tho­se per­so­nal data with appro­pria­te con­fi­den­tia­li­ty obli­ga­ti­ons; (d) the spe­cial cate­go­ries of per­so­nal data pro­ce­s­sed for the pur­po­se of this para­graph are not to be trans­mit­ted, trans­fer­red or other­wi­se acce­s­sed by other par­ties; (e) the spe­cial cate­go­ries of per­so­nal data pro­ce­s­sed for the pur­po­se of this para­graph are dele­ted once the bias has been cor­rec­ted or the per­so­nal data has rea­ched the end of its reten­ti­on peri­od, wha­te­ver comes first; (f) the records of pro­ce­s­sing acti­vi­ties pur­su­ant to Regu­la­ti­on (EU) 2016/679, Direc­ti­ve (EU) 2016/680 and Regu­la­ti­on (EU) 2018/1725 inclu­des justi­fi­ca­ti­on why the pro­ce­s­sing of spe­cial cate­go­ries of per­so­nal data was strict­ly neces­sa­ry to detect and cor­rect bia­ses and this objec­ti­ve could not be achie­ved by pro­ce­s­sing other data. 6. For the deve­lo­p­ment of high-risk AI systems not using tech­ni­ques invol­ving the trai­ning of models, para­graphs 2 to 5 shall app­ly only to the test­ing data sets. 
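Conditions (a) to (f) of Article 10(5) describe concrete safeguards around the exceptional processing of special categories of personal data for bias detection: state-of-the-art pseudonymisation and privacy-preserving measures, strict and documented access control, no onward transfer, and deletion once the bias has been corrected. The Python sketch below illustrates a few of those safeguards; the keyed-hash pseudonymisation, the class names and the key handling are assumptions made for the example, not requirements taken from the text.

# Sketch of some Article 10(5) safeguards when special categories of personal data
# are exceptionally processed for bias detection: pseudonymisation, strict access
# control with documentation of access, and deletion once the bias is corrected.
# This is an illustration of the listed conditions, not a compliance implementation.
import hashlib
import hmac
from dataclasses import dataclass, field
from typing import Dict, List, Optional

SECRET_KEY = b"replace-with-a-properly-managed-key"  # assumption: key kept outside the dataset

def pseudonymise(identifier: str) -> str:
    """Keyed hash so records can be linked for bias analysis without direct identifiers."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

@dataclass
class BiasAnalysisStore:
    authorised_users: List[str]
    records: Dict[str, dict] = field(default_factory=dict)
    access_log: List[str] = field(default_factory=list)  # documentation of access (point (c))

    def add(self, identifier: str, attributes: dict) -> None:
        self.records[pseudonymise(identifier)] = attributes

    def read(self, user: str) -> Optional[Dict[str, dict]]:
        self.access_log.append(user)
        if user not in self.authorised_users:
            return None  # only authorised persons may access (point (c))
        return self.records

    def delete_after_correction(self) -> None:
        self.records.clear()  # deleted once the bias has been corrected (point (e))

store = BiasAnalysisStore(authorised_users=["bias-analyst"])
store.add("subject-001", {"group": "A", "outcome": 1})
print(store.read("bias-analyst"))
store.delete_after_correction()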

Artic­le 11 – Tech­ni­cal documentation

1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date. The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. For this purpose, the Commission shall establish a simplified technical documentation form targeted at the needs of small and micro enterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purpose of conformity assessment. 2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service, one single technical documentation shall be drawn up containing all the information set out in paragraph 1 as well as the information required under those legal acts. 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter.

Artic­le 12 – Record-keeping

1. High-risk AI systems shall tech­ni­cal­ly allow for the auto­ma­tic recor­ding of events (‘logs’) over the dura­ti­on of the life­time of the system. 2. In order to ensu­re a level of tracea­bi­li­ty of the AI system’s func­tio­ning that is appro­pria­te to the inten­ded pur­po­se of the system, log­ging capa­bi­li­ties shall enable the recor­ding of events rele­vant for: (i) iden­ti­fi­ca­ti­on of situa­tions that may result in the AI system pre­sen­ting a risk within the mea­ning of Artic­le 65(1) or in a sub­stan­ti­al modi­fi­ca­ti­on; (ii) faci­li­ta­ti­on of the post-mar­ket moni­to­ring refer­red to in Artic­le 61; and (iii) moni­to­ring of the ope­ra­ti­on of high-risk AI systems refer­red to in Artic­le 29(4). 4. For high-risk AI systems refer­red to in para­graph 1, point (a) of Annex III, the log­ging capa­bi­li­ties shall pro­vi­de, at a mini­mum: (a) recor­ding of the peri­od of each use of the system (start date and time and end date and time of each use); (b) the refe­rence data­ba­se against which input data has been checked by the system; (c) the input data for which the search has led to a match; (d) the iden­ti­fi­ca­ti­on of the natu­ral per­sons invol­ved in the veri­fi­ca­ti­on of the results, as refer­red to in Artic­le 14 (5).
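For the systems referred to in point 1(a) of Annex III, paragraph 4 lists four minimum logging elements: the period of each use, the reference database checked, the matched input data and the persons who verified the results. The following Python sketch shows one possible log record carrying exactly those elements; the field names and the JSON serialisation are illustrative choices, not requirements of the Regulation.

# Sketch of a log record covering the minimum elements of Article 12(4) for the
# systems referred to in point 1(a) of Annex III: period of each use, the reference
# database checked, the inputs that produced a match, and who verified the results.
# Field names are illustrative, not defined by the Regulation.
from dataclasses import dataclass
from datetime import datetime
from typing import List
import json

@dataclass
class UseLogEntry:
    use_start: datetime                 # point (a): start date and time of the use
    use_end: datetime                   # point (a): end date and time of the use
    reference_database: str             # point (b): database the input was checked against
    matched_inputs: List[str]           # point (c): input data for which the search led to a match
    verifying_persons: List[str]        # point (d): natural persons verifying the results (Art. 14(5))

    def to_json(self) -> str:
        payload = {
            "use_start": self.use_start.isoformat(),
            "use_end": self.use_end.isoformat(),
            "reference_database": self.reference_database,
            "matched_inputs": self.matched_inputs,
            "verifying_persons": self.verifying_persons,
        }
        return json.dumps(payload)

entry = UseLogEntry(
    use_start=datetime(2024, 6, 1, 9, 0),
    use_end=datetime(2024, 6, 1, 9, 45),
    reference_database="watchlist-v3",
    matched_inputs=["probe-0042"],
    verifying_persons=["officer-a", "officer-b"],
)
print(entry.to_json())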

Artic­le 13 – Trans­pa­ren­cy and pro­vi­si­on of infor­ma­ti­on to deployers

1. High-risk AI systems shall be desi­gned and deve­lo­ped in such a way to ensu­re that their ope­ra­ti­on is suf­fi­ci­ent­ly trans­pa­rent to enable deployers to inter­pret the system’s out­put and use it appro­pria­te­ly. An appro­pria­te type and degree of trans­pa­ren­cy shall be ensu­red with a view to achie­ving com­pli­ance with the rele­vant obli­ga­ti­ons of the pro­vi­der and deployer set out in Chap­ter 3 of this Tit­le. 2. High-risk AI systems shall be accom­pa­nied by ins­truc­tions for use in an appro­pria­te digi­tal for­mat or other­wi­se that include con­cise, com­ple­te, cor­rect and clear infor­ma­ti­on that is rele­vant, acce­s­si­ble and com­pre­hen­si­ble to users. 3. The ins­truc­tions for use shall con­tain at least the fol­lo­wing infor­ma­ti­on: (a) the iden­ti­ty and the cont­act details of the pro­vi­der and, whe­re appli­ca­ble, of its aut­ho­ri­sed repre­sen­ta­ti­ve; (b) the cha­rac­te­ri­stics, capa­bi­li­ties and limi­ta­ti­ons of per­for­mance of the high-risk AI system, inclu­ding: (i) its inten­ded pur­po­se; (ii) the level of accu­ra­cy, inclu­ding its metrics, robust­ness and cyber­se­cu­ri­ty refer­red to in Artic­le 15 against which the high-risk AI system has been tested and vali­da­ted and which can be expec­ted, and any known and fore­seeable cir­cum­stances that may have an impact on that expec­ted level of accu­ra­cy, robust­ness and cyber­se­cu­ri­ty; (iii) any known or fore­seeable cir­cum­stance, rela­ted to the use of the high-risk AI system in accordance with its inten­ded pur­po­se or under con­di­ti­ons of rea­son­ab­ly fore­seeable misu­se, which may lead to risks to the health and safe­ty or fun­da­men­tal rights refer­red to in Artic­le 9(2); (iiia) whe­re appli­ca­ble, the tech­ni­cal capa­bi­li­ties and cha­rac­te­ri­stics of the AI system to pro­vi­de infor­ma­ti­on that is rele­vant to explain its out­put; (iv) when appro­pria­te, its per­for­mance regar­ding spe­ci­fic per­sons or groups of per­sons on which the system is inten­ded to be used; (v) when appro­pria­te, spe­ci­fi­ca­ti­ons for the input data, or any other rele­vant infor­ma­ti­on in terms of the trai­ning, vali­da­ti­on and test­ing data sets used, taking into account the inten­ded pur­po­se of the AI system; (va) whe­re appli­ca­ble, infor­ma­ti­on to enable deployers to inter­pret the system’s out­put and use it appro­pria­te­ly. (c) the chan­ges to the high-risk AI system and its per­for­mance which have been pre- deter­mi­ned by the pro­vi­der at the moment of the initi­al con­for­mi­ty assess­ment, if any; (d) the human over­sight mea­su­res refer­red to in Artic­le 14, inclu­ding the tech­ni­cal mea­su­res put in place to faci­li­ta­te the inter­pre­ta­ti­on of the out­puts of AI systems by the deployers; (e) the com­pu­ta­tio­nal and hard­ware resour­ces nee­ded, the expec­ted life­time of the high- risk AI system and any neces­sa­ry main­ten­an­ce and care mea­su­res, inclu­ding their fre­quen­cy, to ensu­re the pro­per func­tio­ning of that AI system, inclu­ding as regards soft­ware updates; (ea) whe­re rele­vant, a descrip­ti­on of the mecha­nisms inclu­ded within the AI system that allo­ws users to pro­per­ly coll­ect, store and inter­pret the logs in accordance with Artic­le 12. 
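Paragraph 3 effectively defines a table of contents for the instructions for use. The Python sketch below arranges those items as a structured document a provider might ship with a system; all concrete values are invented, and the mapping of keys to points of the Article is an editorial reading rather than an official schema.

# Sketch of the information items that Article 13(3) requires in the instructions
# for use, expressed as a structured document a provider might ship alongside the
# system. The keys simply mirror the points of the Article; the values are made up.
instructions_for_use = {
    "provider": {                         # point (a)
        "identity": "Example Provider GmbH",
        "contact": "compliance@example.invalid",
    },
    "characteristics": {                  # point (b)
        "intended_purpose": "triage support for incoming service requests",
        "accuracy": {"metric": "macro F1", "tested_level": 0.91},        # point (b)(ii)
        "known_risk_circumstances": ["out-of-scope languages"],          # point (b)(iii)
        "explainability_features": "per-decision feature attributions",  # point (b)(iiia)
        "input_specifications": "UTF-8 text, max 2,000 characters",      # point (b)(v)
    },
    "predetermined_changes": [],          # point (c)
    "human_oversight_measures": ["review queue for low-confidence outputs"],  # point (d)
    "resources_and_lifetime": {           # point (e)
        "hardware": "1 vCPU, 2 GB RAM",
        "expected_lifetime_years": 5,
        "maintenance": "quarterly model review and software updates",
    },
    "log_collection_mechanism": "logs exported daily as JSON, see Article 12",  # point (ea)
}

for section, content in instructions_for_use.items():
    print(section, "->", content)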

Artic­le 14 – Human oversight

1. High-risk AI systems shall be desi­gned and deve­lo­ped in such a way, inclu­ding with appro­pria­te human-machi­ne inter­face tools, that they can be effec­tively over­seen by natu­ral per­sons during the peri­od in which the AI system is in use. 2. Human over­sight shall aim at pre­ven­ting or mini­mi­sing the risks to health, safe­ty or fun­da­men­tal rights that may emer­ge when a high-risk AI system is used in accordance with its inten­ded pur­po­se or under con­di­ti­ons of rea­son­ab­ly fore­seeable misu­se, in par­ti­cu­lar when such risks per­sist not­wi­th­stan­ding the appli­ca­ti­on of other requi­re­ments set out in this Chap­ter. 3. The over­sight mea­su­res shall be com­men­su­ra­te to the risks, level of auto­no­my and con­text of use of the AI system and shall be ensu­red through eit­her one or all of the fol­lo­wing types of mea­su­res: (a) mea­su­res iden­ti­fi­ed and built, when tech­ni­cal­ly fea­si­ble, into the high-risk AI system by the pro­vi­der befo­re it is pla­ced on the mar­ket or put into ser­vice; (b) mea­su­res iden­ti­fi­ed by the pro­vi­der befo­re pla­cing the high-risk AI system on the mar­ket or put­ting it into ser­vice and that are appro­pria­te to be imple­men­ted by the user. 4. For the pur­po­se of imple­men­ting para­graphs 1 to 3, the high-risk AI system shall be pro­vi­ded to the user in such a way that natu­ral per­sons to whom human over­sight is assi­gned are enab­led, as appro­pria­te and pro­por­tio­na­te to the cir­cum­stances: (a) to pro­per­ly under­stand the rele­vant capa­ci­ties and limi­ta­ti­ons of the high-risk AI system and be able to duly moni­tor its ope­ra­ti­on, also in view of detec­ting and addres­sing anoma­lies, dys­func­tions and unex­pec­ted per­for­mance; (b) to remain awa­re of the pos­si­ble ten­den­cy of auto­ma­ti­cal­ly rely­ing or over-rely­ing on the out­put pro­du­ced by a high-risk AI system (‘auto­ma­ti­on bias’), in par­ti­cu­lar for high-risk AI systems used to pro­vi­de infor­ma­ti­on or recom­men­da­ti­ons for decis­i­ons to be taken by natu­ral per­sons; (c) to cor­rect­ly inter­pret the high-risk AI system’s out­put, taking into account for exam­p­le the inter­pre­ta­ti­on tools and methods available; (d) to deci­de, in any par­ti­cu­lar situa­ti­on, not to use the high-risk AI system or other­wi­se dis­re­gard, over­ri­de or rever­se the out­put of the high-risk AI system; (e) to inter­ve­ne on the ope­ra­ti­on of the high-risk AI system or inter­rupt, the system through a “stop” but­ton or a simi­lar pro­ce­du­re that allo­ws the system to come to a halt in a safe sta­te. 5. For high-risk AI systems refer­red to in point 1(a) of Annex III, the mea­su­res refer­red to in para­graph 3 shall be such as to ensu­re that, in addi­ti­on, no action or decis­i­on is taken by the deployer on the basis of the iden­ti­fi­ca­ti­on resul­ting from the system unless this has been sepa­ra­te­ly veri­fi­ed and con­firm­ed by at least two natu­ral per­sons with the neces­sa­ry com­pe­tence, trai­ning and aut­ho­ri­ty. The requi­re­ment for a sepa­ra­te veri­fi­ca­ti­on by at least two natu­ral per­sons shall not app­ly to high risk AI systems used for the pur­po­se of law enforce­ment, migra­ti­on, bor­der con­trol or asyl­um, in cases whe­re Uni­on or natio­nal law con­siders the appli­ca­ti­on of this requi­re­ment to be disproportionate. 
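Points (d) and (e) of paragraph 4 require that the persons assigned to oversight can disregard or override the system's output and bring the system to a halt in a safe state. The Python sketch below models only those two capabilities; the wrapper class and its behaviour are assumptions for illustration and do not represent any prescribed technical design.

# Sketch of two oversight capabilities listed in Article 14(4): the person assigned
# to oversight can disregard or override the system's output (point (d)) and bring
# the system to a halt in a safe state via a stop control (point (e)).
from typing import Callable, Optional

class OverseenSystem:
    def __init__(self, model: Callable[[str], str]):
        self._model = model
        self._stopped = False

    def stop(self) -> None:
        """'Stop button': the system comes to a halt in a safe state."""
        self._stopped = True

    def decide(self, case: str, human_override: Optional[str] = None) -> Optional[str]:
        if self._stopped:
            return None                      # halted: no further outputs are produced
        if human_override is not None:
            return human_override            # point (d): output disregarded/overridden
        return self._model(case)

system = OverseenSystem(model=lambda case: f"automated recommendation for {case}")
print(system.decide("case-17"))                                          # automated output
print(system.decide("case-18", human_override="refer to a human caseworker"))
system.stop()
print(system.decide("case-19"))                                          # None after the stop control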

Artic­le 15 – Accu­ra­cy, robust­ness and cybersecurity

1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle. 1a. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 of this Article and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies. 2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. 3. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations ('feedback loops') and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures. 4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. The technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset ('data poisoning'), or pre-trained components used in training ('model poisoning'), inputs designed to cause the model to make a mistake ('adversarial examples' or 'model evasion'), confidentiality attacks or model flaws.

Chap­ter 3 OBLIGATIONS OF PROVIDERS AND DEPLOYERS OF HIGH- RISK AI SYSTEMS AND OTHER PARTIES

Artic­le 16 – Obli­ga­ti­ons of pro­vi­ders of high-risk AI systems

Pro­vi­ders of high-risk AI systems shall: (a) ensu­re that their high-risk AI systems are com­pli­ant with the requi­re­ments set out in Chap­ter 2 of this Tit­le; (aa) indi­ca­te their name, regi­stered trade name or regi­stered trade mark, the address at which they can be cont­ac­ted on the high-risk AI system or, whe­re that is not pos­si­ble, on its pack­a­ging or its accom­pany­ing docu­men­ta­ti­on, as appli­ca­ble; (b) have a qua­li­ty manage­ment system in place which com­plies with Artic­le 17; (c) keep the docu­men­ta­ti­on refer­red to in Artic­le 18; (d) when under their con­trol, keep the logs auto­ma­ti­cal­ly gene­ra­ted by their high-risk AI systems as refer­red to in Artic­le 20; (e) ensu­re that the high-risk AI system under­goes the rele­vant con­for­mi­ty assess­ment pro­ce­du­re as refer­red to in Artic­le 43, pri­or to its pla­cing on the mar­ket or put­ting into ser­vice; (ea) draw up an EU decla­ra­ti­on of con­for­mi­ty in accordance with Artic­le 48; (eb) affix the CE mar­king to the high-risk AI system to indi­ca­te con­for­mi­ty with this Regu­la­ti­on, in accordance with Artic­le 49; (f) com­ply with the regi­stra­ti­on obli­ga­ti­ons refer­red to in Artic­le 51(1); (g) take the neces­sa­ry cor­rec­ti­ve actions and pro­vi­de infor­ma­ti­on as requi­red in Artic­le 21; (j) upon a rea­so­ned request of a natio­nal com­pe­tent aut­ho­ri­ty, demon­stra­te the con­for­mi­ty of the high-risk AI system with the requi­re­ments set out in Chap­ter 2 of this Tit­le; (ja) ensu­re that the high-risk AI system com­plies with acce­s­si­bi­li­ty requi­re­ments, in accordance with Direc­ti­ve 2019/882 on acce­s­si­bi­li­ty requi­re­ments for pro­ducts and ser­vices and Direc­ti­ve 2016/2102 on the acce­s­si­bi­li­ty of the web­sites and mobi­le appli­ca­ti­ons of public sec­tor bodies. 

Artic­le 17 – Qua­li­ty manage­ment system

1. Pro­vi­ders of high-risk AI systems shall put a qua­li­ty manage­ment system in place that ensu­res com­pli­ance with this Regu­la­ti­on. That system shall be docu­men­ted in a syste­ma­tic and order­ly man­ner in the form of writ­ten poli­ci­es, pro­ce­du­res and ins­truc­tions, and shall include at least the fol­lo­wing aspects: (a) a stra­tegy for regu­la­to­ry com­pli­ance, inclu­ding com­pli­ance with con­for­mi­ty assess­ment pro­ce­du­res and pro­ce­du­res for the manage­ment of modi­fi­ca­ti­ons to the high-risk AI system; (b) tech­ni­ques, pro­ce­du­res and syste­ma­tic actions to be used for the design, design con­trol and design veri­fi­ca­ti­on of the high-risk AI system; (c) tech­ni­ques, pro­ce­du­res and syste­ma­tic actions to be used for the deve­lo­p­ment, qua­li­ty con­trol and qua­li­ty assu­rance of the high-risk AI system; (d) exami­na­ti­on, test and vali­da­ti­on pro­ce­du­res to be car­ri­ed out befo­re, during and after the deve­lo­p­ment of the high-risk AI system, and the fre­quen­cy with which they have to be car­ri­ed out; (e) tech­ni­cal spe­ci­fi­ca­ti­ons, inclu­ding stan­dards, to be applied and, whe­re the rele­vant har­mo­ni­s­ed stan­dards are not applied in full, or do not cover all of the rele­vant requi­re­ments set out in Chap­ter II of this Tit­le, the means to be used to ensu­re that the high-risk AI system com­plies with tho­se requi­re­ments; (f) systems and pro­ce­du­res for data manage­ment, inclu­ding data acqui­si­ti­on, data coll­ec­tion, data ana­ly­sis, data label­ling, data sto­rage, data fil­tra­ti­on, data mining, data aggre­ga­ti­on, data reten­ti­on and any other ope­ra­ti­on regar­ding the data that is per­for­med befo­re and for the pur­po­ses of the pla­cing on the mar­ket or put­ting into ser­vice of high-risk AI systems; (g) the risk manage­ment system refer­red to in Artic­le 9; (h) the set­ting-up, imple­men­ta­ti­on and main­ten­an­ce of a post-mar­ket moni­to­ring system, in accordance with Artic­le 61; (i) pro­ce­du­res rela­ted to the report­ing of a serious inci­dent in accordance with Artic­le 62; (j) the hand­ling of com­mu­ni­ca­ti­on with natio­nal com­pe­tent aut­ho­ri­ties, other rele­vant aut­ho­ri­ties, inclu­ding tho­se pro­vi­ding or sup­port­ing the access to data, noti­fi­ed bodies, other ope­ra­tors, cus­to­mers or other inte­re­sted par­ties; (k) systems and pro­ce­du­res for record kee­ping of all rele­vant docu­men­ta­ti­on and infor­ma­ti­on; (l) resour­ce manage­ment, inclu­ding secu­ri­ty of sup­p­ly rela­ted mea­su­res; (m) an accoun­ta­bi­li­ty frame­work set­ting out the respon­si­bi­li­ties of the manage­ment and other staff with regard to all aspects listed in this para­graph. 2. The imple­men­ta­ti­on of aspects refer­red to in para­graph 1 shall be pro­por­tio­na­te to the size of the provider’s orga­ni­sa­ti­on. Pro­vi­ders shall in any event respect the degree of rigour and the level of pro­tec­tion requi­red to ensu­re com­pli­ance of their AI systems with this Regu­la­ti­on. 2a. For pro­vi­ders of high-risk AI systems that are sub­ject to obli­ga­ti­ons regar­ding qua­li­ty manage­ment systems or their equi­va­lent func­tion under rele­vant sec­to­ri­al Uni­on law, the aspects descri­bed in para­graph 1 may be part of the qua­li­ty manage­ment systems pur­su­ant to that law. 3. 
For pro­vi­ders that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices legis­la­ti­on, the obli­ga­ti­on to put in place a qua­li­ty manage­ment system with the excep­ti­on of para­graph 1, points (g), (h) and (i) shall be dee­med to be ful­fil­led by com­ply­ing with the rules on inter­nal gover­nan­ce arran­ge­ments or pro­ce­s­ses pur­su­ant to the rele­vant Uni­on finan­cial ser­vices legis­la­ti­on. In that con­text, any har­mo­ni­s­ed stan­dards refer­red to in Artic­le 40 of this Regu­la­ti­on shall be taken into account. 

Artic­le 18 – Docu­men­ta­ti­on keeping

1. The pro­vi­der shall, for a peri­od ending 10 years after the AI system has been pla­ced on the mar­ket or put into ser­vice, keep at the dis­po­sal of the natio­nal com­pe­tent aut­ho­ri­ties: (a) the tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11; (b) the docu­men­ta­ti­on con­cer­ning the qua­li­ty manage­ment system refer­red to in Artic­le 17; (c) the docu­men­ta­ti­on con­cer­ning the chan­ges appro­ved by noti­fi­ed bodies whe­re appli­ca­ble; (d) the decis­i­ons and other docu­ments issued by the noti­fi­ed bodies whe­re appli­ca­ble; (e) the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 48. 1a. Each Mem­ber Sta­te shall deter­mi­ne con­di­ti­ons under which the docu­men­ta­ti­on refer­red to in para­graph 1 remains at the dis­po­sal of the natio­nal com­pe­tent aut­ho­ri­ties for the peri­od indi­ca­ted in that para­graph for the cases when a pro­vi­der or its aut­ho­ri­sed repre­sen­ta­ti­ve estab­lished on its ter­ri­to­ry goes bank­rupt or cea­ses its acti­vi­ty pri­or to the end of that peri­od. 2. Pro­vi­ders that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices legis­la­ti­on shall main­tain the tech­ni­cal docu­men­ta­ti­on as part of the docu­men­ta­ti­on kept under the rele­vant Uni­on finan­cial ser­vices legislation. 

Artic­le 20 – Auto­ma­ti­cal­ly gene­ra­ted logs

1. Pro­vi­ders of high-risk AI systems shall keep the logs, refer­red to in Artic­le 12(1), auto­ma­ti­cal­ly gene­ra­ted by their high-risk AI systems, to the ext­ent such logs are under their con­trol. Wit­hout pre­ju­di­ce to appli­ca­ble Uni­on or natio­nal law, the logs shall be kept for a peri­od appro­pria­te to the inten­ded pur­po­se of the high-risk AI system, of at least 6 months, unless pro­vi­ded other­wi­se in appli­ca­ble Uni­on or natio­nal law, in par­ti­cu­lar in Uni­on law on the pro­tec­tion of per­so­nal data. 2. Pro­vi­ders that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices legis­la­ti­on shall main­tain the logs auto­ma­ti­cal­ly gene­ra­ted by their high-risk AI systems as part of the docu­men­ta­ti­on kept under the rele­vant finan­cial ser­vice legislation. 
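Paragraph 1 sets a retention floor: logs under the provider's control are kept for a period appropriate to the intended purpose and of at least six months, unless other Union or national law provides otherwise. The Python sketch below approximates that floor (taken here as 183 days, an assumption made for the example) when deciding which logs may be deleted.

# Sketch of the retention rule in Article 20(1): providers keep automatically
# generated logs under their control for a period appropriate to the intended
# purpose and of at least six months, unless other Union or national law applies.
from datetime import datetime, timedelta
from typing import List, Tuple

MINIMUM_RETENTION = timedelta(days=183)  # assumption: ~6 months expressed in days

def logs_eligible_for_deletion(
    logs: List[Tuple[str, datetime]], now: datetime, retention: timedelta = MINIMUM_RETENTION
) -> List[str]:
    """Return log identifiers older than the retention period; newer logs must be kept."""
    if retention < MINIMUM_RETENTION:
        raise ValueError("retention period may not undercut the six-month minimum")
    return [log_id for log_id, created in logs if now - created > retention]

now = datetime(2024, 12, 1)
logs = [("log-jan", datetime(2024, 1, 15)), ("log-oct", datetime(2024, 10, 20))]
print(logs_eligible_for_deletion(logs, now))  # only the January log is old enough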

Artic­le 21 – Cor­rec­ti­ve actions and duty of information

Pro­vi­ders of high-risk AI systems which con­sider or have rea­son to con­sider that a high- risk AI system which they have pla­ced on the mar­ket or put into ser­vice is not in con­for­mi­ty with this Regu­la­ti­on shall imme­dia­te­ly take the neces­sa­ry cor­rec­ti­ve actions to bring that system into con­for­mi­ty, to with­draw it, to disable it, or to recall it, as appro­pria­te. They shall inform the dis­tri­bu­tors of the high-risk AI system in que­sti­on and, whe­re appli­ca­ble, the deployers, the aut­ho­ri­sed repre­sen­ta­ti­ve and importers accor­din­gly. Whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 65(1) and the pro­vi­der beco­mes awa­re of that risk, they shall imme­dia­te­ly inve­sti­ga­te the cau­ses, in col­la­bo­ra­ti­on with the report­ing deployer, whe­re appli­ca­ble, and inform the mar­ket sur­veil­lan­ce aut­ho­ri­ties of the Mem­ber Sta­tes in which they made the high-risk AI system available and, whe­re appli­ca­ble, the noti­fi­ed body that issued a cer­ti­fi­ca­te for the high-risk AI system in accordance with Artic­le 44, in par­ti­cu­lar, of the natu­re of the non-com­pli­ance and of any rele­vant cor­rec­ti­ve action taken. 

Artic­le 23 – Coope­ra­ti­on with com­pe­tent authorities

1. Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in a language which can be easily understood by the authority, in an official Union language determined by the Member State concerned. 1a. Upon a reasoned request by a national competent authority, providers shall also give the requesting national competent authority, as applicable, access to the logs referred to in Article 12(1) automatically generated by the high-risk AI system, to the extent such logs are under their control. 1b. Any information obtained by a national competent authority pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.

Artic­le 25 – Aut­ho­ri­sed representatives

1. Pri­or to making their systems available on the Uni­on mar­ket pro­vi­ders estab­lished out­side the Uni­on shall, by writ­ten man­da­te, appoint an aut­ho­ri­sed repre­sen­ta­ti­ve which is estab­lished in the Uni­on. 1b. The pro­vi­der shall enable its aut­ho­ri­sed repre­sen­ta­ti­ve to per­form its tasks under this Regu­la­ti­on. 2. The aut­ho­ri­sed repre­sen­ta­ti­ve shall per­form the tasks spe­ci­fi­ed in the man­da­te recei­ved from the pro­vi­der. It shall pro­vi­de a copy of the man­da­te to the mar­ket sur­veil­lan­ce aut­ho­ri­ties upon request, in one of the offi­ci­al lan­guages of the insti­tu­ti­on of the Uni­on deter­mi­ned by the natio­nal com­pe­tent aut­ho­ri­ty. For the pur­po­se of this Regu­la­ti­on, the man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to car­ry out the fol­lo­wing tasks: (-a) veri­fy that the EU decla­ra­ti­on of con­for­mi­ty and the tech­ni­cal docu­men­ta­ti­on have been drawn up and that an appro­pria­te con­for­mi­ty assess­ment pro­ce­du­re has been car­ri­ed out by the pro­vi­der; (a) keep at the dis­po­sal of the natio­nal com­pe­tent aut­ho­ri­ties and natio­nal aut­ho­ri­ties refer­red to in Artic­le 63(7), for a peri­od ending 10 years after the high-risk AI system has been pla­ced on the mar­ket or put into ser­vice, the cont­act details of the pro­vi­der by which the aut­ho­ri­sed repre­sen­ta­ti­ve has been appoin­ted, a copy of the EU decla­ra­ti­on of con­for­mi­ty, the tech­ni­cal docu­men­ta­ti­on and, if appli­ca­ble, the cer­ti­fi­ca­te issued by the noti­fi­ed body; (b) pro­vi­de a natio­nal com­pe­tent aut­ho­ri­ty, upon a rea­so­ned request, with all the infor­ma­ti­on and docu­men­ta­ti­on, inclu­ding that kept accor­ding to point (a), neces­sa­ry to demon­stra­te the con­for­mi­ty of a high-risk AI system with the requi­re­ments set out in Chap­ter 2 of this Tit­le, inclu­ding access to the logs, as refer­red to in Artic­le 12(1), auto­ma­ti­cal­ly gene­ra­ted by the high-risk AI system to the ext­ent such logs are under the con­trol of the pro­vi­der; (c) coope­ra­te with com­pe­tent aut­ho­ri­ties, upon a rea­so­ned request, on any action the lat­ter takes in rela­ti­on to the high-risk AI system, in par­ti­cu­lar to redu­ce and miti­ga­te the risks posed by the high-risk AI system; (ca) whe­re appli­ca­ble, com­ply with the regi­stra­ti­on obli­ga­ti­ons refer­red in Artic­le 51(1), or, if the regi­stra­ti­on is car­ri­ed out by the pro­vi­der its­elf, ensu­re that the infor­ma­ti­on refer­red to in [point 3] of Annex VIII is cor­rect. The man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to be addres­sed, in addi­ti­on to or instead of the pro­vi­der, by the com­pe­tent aut­ho­ri­ties, on all issues rela­ted to ensu­ring com­pli­ance with this Regu­la­ti­on. 2b. The aut­ho­ri­sed repre­sen­ta­ti­ve shall ter­mi­na­te the man­da­te if it con­siders or has rea­son to con­sider that the pro­vi­der acts con­tra­ry to its obli­ga­ti­ons under this Regu­la­ti­on. In such a case, it shall also imme­dia­te­ly inform the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te in which it is estab­lished, as well as, whe­re appli­ca­ble, the rele­vant noti­fi­ed body, about the ter­mi­na­ti­on of the man­da­te and the rea­sons thereof. 

Artic­le 26 – Obli­ga­ti­ons of importers

1. Befo­re pla­cing a high-risk AI system on the mar­ket, importers of such system shall ensu­re that such a system is in con­for­mi­ty with this Regu­la­ti­on by veri­fy­ing that: (a) the rele­vant con­for­mi­ty assess­ment pro­ce­du­re refer­red to in Artic­le 43 has been car­ri­ed out by the pro­vi­der of that AI system; (b) the pro­vi­der has drawn up the tech­ni­cal docu­men­ta­ti­on in accordance with Artic­le 11 and Annex IV; (c) the system bears the requi­red CE con­for­mi­ty mar­king and is accom­pa­nied by the EU decla­ra­ti­on of con­for­mi­ty and ins­truc­tions of use; (ca) the pro­vi­der has appoin­ted an aut­ho­ri­sed repre­sen­ta­ti­ve in accordance with Artic­le 25(1). 2. Whe­re an importer has suf­fi­ci­ent rea­son to con­sider that a high-risk AI system is not in con­for­mi­ty with this Regu­la­ti­on, or is fal­si­fi­ed, or accom­pa­nied by fal­si­fi­ed docu­men­ta­ti­on, it shall not place that system on the mar­ket until that AI system has been brought into con­for­mi­ty. Whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 65(1), the importer shall inform the pro­vi­der of the AI system, the aut­ho­ri­sed repre­sen­ta­ti­ves and the mar­ket sur­veil­lan­ce aut­ho­ri­ties to that effect. 3. Importers shall indi­ca­te their name, regi­stered trade name or regi­stered trade­mark, and the address at which they can be cont­ac­ted on the high-risk AI system and on its pack­a­ging or its accom­pany­ing docu­men­ta­ti­on, whe­re appli­ca­ble. 4. Importers shall ensu­re that, while a high-risk AI system is under their respon­si­bi­li­ty, whe­re appli­ca­ble, sto­rage or trans­port con­di­ti­ons do not jeo­par­di­se its com­pli­ance with the requi­re­ments set out in Chap­ter 2 of this Tit­le. 4a. Importers shall keep, for a peri­od ending 10 years after the AI system has been pla­ced on the mar­ket or put into ser­vice, a copy of the cer­ti­fi­ca­te issued by the noti­fi­ed body, whe­re appli­ca­ble, of the ins­truc­tions for use and of the EU decla­ra­ti­on of con­for­mi­ty. 5. Importers shall pro­vi­de natio­nal com­pe­tent aut­ho­ri­ties, upon a rea­so­ned request, with all the neces­sa­ry infor­ma­ti­on and docu­men­ta­ti­on inclu­ding that kept in accordance with para­graph 4a to demon­stra­te the con­for­mi­ty of a high-risk AI system with the requi­re­ments set out in Chap­ter 2 of this Tit­le in a lan­guage which can be easi­ly under­s­tood by them. To this pur­po­se they shall also ensu­re that the tech­ni­cal docu­men­ta­ti­on can be made available to tho­se aut­ho­ri­ties. 5a. Importers shall coope­ra­te with natio­nal com­pe­tent aut­ho­ri­ties on any action tho­se aut­ho­ri­ties take, in par­ti­cu­lar to redu­ce and miti­ga­te the risks posed by the high-risk AI system. 
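Paragraph 1 amounts to a pre-market checklist for importers: conformity assessment, technical documentation, CE marking, EU declaration of conformity, instructions of use and an authorised representative. The Python sketch below turns that checklist into a small verification helper; the dossier fields are illustrative stand-ins for the underlying evidence, not terms used by the Regulation.

# Sketch of the importer verification in Article 26(1): before placing a high-risk
# AI system on the market, the importer checks points (a) to (ca). The boolean
# fields are illustrative stand-ins for the underlying evidence.
from dataclasses import dataclass
from typing import List

@dataclass
class ImportDossier:
    conformity_assessment_done: bool            # point (a)
    technical_documentation_drawn_up: bool      # point (b)
    ce_marking_affixed: bool                    # point (c)
    declaration_of_conformity_present: bool     # point (c)
    instructions_of_use_present: bool           # point (c)
    authorised_representative_appointed: bool   # point (ca)

def open_issues(d: ImportDossier) -> List[str]:
    """Return the verification points that still block placing on the market."""
    checks = {
        "conformity assessment (Article 43)": d.conformity_assessment_done,
        "technical documentation (Article 11 / Annex IV)": d.technical_documentation_drawn_up,
        "CE marking": d.ce_marking_affixed,
        "EU declaration of conformity": d.declaration_of_conformity_present,
        "instructions of use": d.instructions_of_use_present,
        "authorised representative (Article 25(1))": d.authorised_representative_appointed,
    }
    return [name for name, ok in checks.items() if not ok]

dossier = ImportDossier(True, True, True, True, False, True)
print(open_issues(dossier))  # ['instructions of use'] -> may not be placed on the market yet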

Artic­le 27 – Obli­ga­ti­ons of distributors

1. Befo­re making a high-risk AI system available on the mar­ket, dis­tri­bu­tors shall veri­fy that the high-risk AI system bears the requi­red CE con­for­mi­ty mar­king, that it is accom­pa­nied by a copy of EU decla­ra­ti­on of con­for­mi­ty and ins­truc­tion of use, and that the pro­vi­der and the importer of the system, as appli­ca­ble, have com­plied with their obli­ga­ti­ons set out in Artic­le 16, point (aa) and (b) and 26(3) respec­tively. 2. Whe­re a dis­tri­bu­tor con­siders or has rea­son to con­sider, on the basis of the infor­ma­ti­on in its pos­ses­si­on, that a high-risk AI system is not in con­for­mi­ty with the requi­re­ments set out in Chap­ter 2 of this Tit­le, it shall not make the high-risk AI system available on the mar­ket until that system has been brought into con­for­mi­ty with tho­se requi­re­ments. Fur­ther­mo­re, whe­re the system pres­ents a risk within the mea­ning of Artic­le 65(1), the dis­tri­bu­tor shall inform the pro­vi­der or the importer of the system, as appli­ca­ble, to that effect. 3. Dis­tri­bu­tors shall ensu­re that, while a high-risk AI system is under their respon­si­bi­li­ty, whe­re appli­ca­ble, sto­rage or trans­port con­di­ti­ons do not jeo­par­di­se the com­pli­ance of the system with the requi­re­ments set out in Chap­ter 2 of this Tit­le. 4. A dis­tri­bu­tor that con­siders or has rea­son to con­sider, on the basis of the infor­ma­ti­on in its pos­ses­si­on, that a high-risk AI system which it has made available on the mar­ket is not in con­for­mi­ty with the requi­re­ments set out in Chap­ter 2 of this Tit­le shall take the cor­rec­ti­ve actions neces­sa­ry to bring that system into con­for­mi­ty with tho­se requi­re­ments, to with­draw it or recall it or shall ensu­re that the pro­vi­der, the importer or any rele­vant ope­ra­tor, as appro­pria­te, takes tho­se cor­rec­ti­ve actions. Whe­re the high-risk AI system pres­ents a risk within the mea­ning of Artic­le 65(1), the dis­tri­bu­tor shall imme­dia­te­ly inform the pro­vi­der or importer of the system and the natio­nal com­pe­tent aut­ho­ri­ties of the Mem­ber Sta­tes in which it has made the pro­duct available to that effect, giving details, in par­ti­cu­lar, of the non-com­pli­ance and of any cor­rec­ti­ve actions taken. 5. Upon a rea­so­ned request from a natio­nal com­pe­tent aut­ho­ri­ty, dis­tri­bu­tors of the high-risk AI system shall pro­vi­de that aut­ho­ri­ty with all the infor­ma­ti­on and docu­men­ta­ti­on regar­ding its acti­vi­ties as descri­bed in para­graph 1 to 4 neces­sa­ry to demon­stra­te the con­for­mi­ty of a high-risk system with the requi­re­ments set out in Chap­ter 2 of this Tit­le. 5a. Dis­tri­bu­tors shall coope­ra­te with natio­nal com­pe­tent aut­ho­ri­ties on any action tho­se aut­ho­ri­ties take in rela­ti­on to an AI system, of which they are the dis­tri­bu­tor, in par­ti­cu­lar to redu­ce or miti­ga­te the risk posed by the high-risk AI system. 

Artic­le 28 – Respon­si­bi­li­ties along the AI value chain

1. Any dis­tri­bu­tor, importer, deployer or other third-par­ty shall be con­side­red a pro­vi­der of a high-risk AI system for the pur­po­ses of this Regu­la­ti­on and shall be sub­ject to the obli­ga­ti­ons of the pro­vi­der under Artic­le 16, in any of the fol­lo­wing cir­cum­stances: (a) they put their name or trade­mark on a high-risk AI system alre­a­dy pla­ced on the mar­ket or put into ser­vice, wit­hout pre­ju­di­ce to con­trac­tu­al arran­ge­ments sti­pu­la­ting that the obli­ga­ti­ons are allo­ca­ted other­wi­se; (b) they make a sub­stan­ti­al modi­fi­ca­ti­on to a high-risk AI system that has alre­a­dy been pla­ced on the mar­ket or has alre­a­dy been put into ser­vice and in a way that it remains a high-risk AI system in accordance with Artic­le 6; (ba) they modi­fy the inten­ded pur­po­se of an AI system, inclu­ding a gene­ral pur­po­se AI system, which has not been clas­si­fi­ed as high-risk and has alre­a­dy been pla­ced on the mar­ket or put into ser­vice in such man­ner that the AI system beco­mes a high risk AI system in accordance with Artic­le 6. 2. Whe­re the cir­cum­stances refer­red to in para­graph 1, point (a) to (ba) occur, the pro­vi­der that initi­al­ly pla­ced the AI system on the mar­ket or put it into ser­vice shall no lon­ger be con­side­red a pro­vi­der of that spe­ci­fic AI system for the pur­po­ses of this Regu­la­ti­on. This for­mer pro­vi­der shall clo­se­ly coope­ra­te and shall make available the neces­sa­ry infor­ma­ti­on and pro­vi­de the rea­son­ab­ly expec­ted tech­ni­cal access and other assi­stance that are requi­red for the ful­film­ent of the obli­ga­ti­ons set out in this Regu­la­ti­on, in par­ti­cu­lar regar­ding the com­pli­ance with the con­for­mi­ty assess­ment of high-risk AI systems. This para­graph shall not app­ly in the cases whe­re the for­mer pro­vi­der has express­ly exclu­ded the chan­ge of its system into a high-risk system and the­r­e­fo­re the obli­ga­ti­on to hand over the docu­men­ta­ti­on. 2a. For high-risk AI systems that are safe­ty com­pon­ents of pro­ducts to which the legal acts listed in Annex II, sec­tion A app­ly, the manu­fac­tu­rer of tho­se pro­ducts shall be con­side­red the pro­vi­der of the high-risk AI system and shall be sub­ject to the obli­ga­ti­ons under Artic­le 16 under eit­her of the fol­lo­wing sce­na­ri­os: (i) the high-risk AI system is pla­ced on the mar­ket tog­e­ther with the pro­duct under the name or trade­mark of the pro­duct manu­fac­tu­rer; (ii) the high-risk AI system is put into ser­vice under the name or trade­mark of the pro­duct manu­fac­tu­rer after the pro­duct has been pla­ced on the mar­ket. 2b. The pro­vi­der of a high risk AI system and the third par­ty that sup­plies an AI system, tools, ser­vices, com­pon­ents, or pro­ce­s­ses that are used or inte­gra­ted in a high-risk AI system shall, by writ­ten agree­ment, spe­ci­fy the neces­sa­ry infor­ma­ti­on, capa­bi­li­ties, tech­ni­cal access and other assi­stance based on the gene­ral­ly ack­now­led­ged sta­te of the art, in order to enable the pro­vi­der of the high risk AI system to ful­ly com­ply with the obli­ga­ti­ons set out in this Regu­la­ti­on. This obli­ga­ti­on shall not app­ly to third par­ties making acce­s­si­ble to the public tools, ser­vices, pro­ce­s­ses, or AI com­pon­ents other than gene­ral-pur­po­se AI models under a free and open licence. 
The AI Office may deve­lop and recom­mend vol­un­t­a­ry model con­trac­tu­al terms bet­ween pro­vi­ders of high-risk AI systems and third par­ties that sup­p­ly tools, ser­vices, com­pon­ents or pro­ce­s­ses that are used or inte­gra­ted in high-risk AI systems. When deve­lo­ping vol­un­t­a­ry model con­trac­tu­al terms, the AI Office shall take into account pos­si­ble con­trac­tu­al requi­re­ments appli­ca­ble in spe­ci­fic sec­tors or busi­ness cases. The model con­trac­tu­al terms shall be published and be available free of char­ge in an easi­ly usable elec­tro­nic for­mat. 2b. Para­graphs 2 and 2a are wit­hout pre­ju­di­ce to the need to respect and pro­tect intellec­tu­al pro­per­ty rights and con­fi­den­ti­al busi­ness infor­ma­ti­on or trade secrets in accordance with Uni­on and natio­nal law. 

Artic­le 29 – Obli­ga­ti­ons of deployers of high-risk AI systems

1. Deployers of high-risk AI systems shall take appro­pria­te tech­ni­cal and orga­ni­sa­tio­nal mea­su­res to ensu­re they use such systems in accordance with the ins­truc­tions of use accom­pany­ing the systems, pur­su­ant to para­graphs 2 and 5 of this Artic­le. 1a. To the ext­ent deployers exer­cise con­trol over the high-risk AI system, they shall ensu­re that the natu­ral per­sons assi­gned to ensu­re human over­sight of the high-risk AI systems have the neces­sa­ry com­pe­tence, trai­ning and aut­ho­ri­ty as well as the neces­sa­ry sup­port. 2. The obli­ga­ti­ons in para­graph 1 and 1a, are wit­hout pre­ju­di­ce to other deployer obli­ga­ti­ons under Uni­on or natio­nal law and to the deployer’s dis­creti­on in orga­ni­s­ing its own resour­ces and acti­vi­ties for the pur­po­se of imple­men­ting the human over­sight mea­su­res indi­ca­ted by the pro­vi­der. 3. Wit­hout pre­ju­di­ce to para­graph 1 and 1a, to the ext­ent the deployer exer­cis­es con­trol over the input data, that deployer shall ensu­re that input data is rele­vant and suf­fi­ci­ent­ly repre­sen­ta­ti­ve in view of the inten­ded pur­po­se of the high-risk AI system. 4. Deployers shall moni­tor the ope­ra­ti­on of the high-risk AI system on the basis of the ins­truc­tions of use and when rele­vant, inform pro­vi­ders in accordance with Artic­le 61. When they have rea­sons to con­sider that the use in accordance with the ins­truc­tions of use may result in the AI system pre­sen­ting a risk within the mea­ning of Artic­le 65(1) they shall, wit­hout undue delay, inform the pro­vi­der or dis­tri­bu­tor and rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ty and sus­pend the use of the system. They shall also imme­dia­te­ly inform first the pro­vi­der, and then the importer or dis­tri­bu­tor and rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ties when they have iden­ti­fi­ed any serious inci­dent. If the deployer is not able to reach the pro­vi­der, Artic­le 62 shall app­ly muta­tis mut­an­dis. This obli­ga­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data of deployers of AI systems which are law enforce­ment aut­ho­ri­ties. For deployers that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices legis­la­ti­on, the moni­to­ring obli­ga­ti­on set out in the first sub­pa­ra­graph shall be dee­med to be ful­fil­led by com­ply­ing with the rules on inter­nal gover­nan­ce arran­ge­ments, pro­ce­s­ses and mecha­nisms pur­su­ant to the rele­vant finan­cial ser­vice legis­la­ti­on. 5. Deployers of high-risk AI systems shall keep the logs auto­ma­ti­cal­ly gene­ra­ted by that high-risk AI system to the ext­ent such logs are under their con­trol for a peri­od appro­pria­te to the inten­ded pur­po­se of the high-risk AI system, of at least six months, unless pro­vi­ded other­wi­se in appli­ca­ble Uni­on or natio­nal law, in par­ti­cu­lar in Uni­on law on the pro­tec­tion of per­so­nal data. Deployers that are finan­cial insti­tu­ti­ons sub­ject to requi­re­ments regar­ding their inter­nal gover­nan­ce, arran­ge­ments or pro­ce­s­ses under Uni­on finan­cial ser­vices legis­la­ti­on shall main­tain the logs as part of the docu­men­ta­ti­on kept pur­su­ant to the rele­vant Uni­on finan­cial ser­vice legis­la­ti­on. 
(a) Pri­or to put­ting into ser­vice or use a high-risk AI system at the work­place, deployers who are employers shall inform workers repre­sen­ta­ti­ves and the affec­ted workers that they will be sub­ject to the system. This infor­ma­ti­on shall be pro­vi­ded, whe­re appli­ca­ble, in accordance with the rules and pro­ce­du­res laid down in Uni­on and natio­nal law and prac­ti­ce on infor­ma­ti­on of workers and their repre­sen­ta­ti­ves. (b) Deployers of high-risk AI systems that are public aut­ho­ri­ties or Uni­on insti­tu­ti­ons, bodies, offices and agen­ci­es shall com­ply with the regi­stra­ti­on obli­ga­ti­ons refer­red to in Artic­le 51. When they find that the system that they envi­sa­ge to use has not been regi­stered in the EU data­ba­se refer­red to in Artic­le 60 they shall not use that system and shall inform the pro­vi­der or the dis­tri­bu­tor. 6. Whe­re appli­ca­ble, deployers of high-risk AI systems shall use the infor­ma­ti­on pro­vi­ded under Artic­le 13 to com­ply with their obli­ga­ti­on to car­ry out a data pro­tec­tion impact assess­ment under Artic­le 35 of Regu­la­ti­on (EU) 2016/679 or Artic­le 27 of Direc­ti­ve (EU) 2016/680. 6a. Wit­hout pre­ju­di­ce to Direc­ti­ve (EU) 2016/680, in the frame­work of an inve­sti­ga­ti­on for the tar­ge­ted search of a per­son con­vic­ted or suspec­ted of having com­mit­ted a cri­mi­nal offence, the deployer of an AI system for post-remo­te bio­me­tric iden­ti­fi­ca­ti­on shall request an aut­ho­ri­sa­ti­on, pri­or, or wit­hout undue delay and no later than 48 hours, by a judi­cial aut­ho­ri­ty or an admi­ni­stra­ti­ve aut­ho­ri­ty who­se decis­i­on is bin­ding and sub­ject to judi­cial review, for the use of the system, except when the system is used for the initi­al iden­ti­fi­ca­ti­on of a poten­ti­al suspect based on objec­ti­ve and veri­fia­ble facts direct­ly lin­ked to the offence. Each use shall be limi­t­ed to what is strict­ly neces­sa­ry for the inve­sti­ga­ti­on of a spe­ci­fic cri­mi­nal offence. If the reque­sted aut­ho­ri­sa­ti­on pro­vi­ded for in the first sub­pa­ra­graph of this para­graph is rejec­ted, the use of the post remo­te bio­me­tric iden­ti­fi­ca­ti­on system lin­ked to that aut­ho­ri­sa­ti­on shall be stop­ped with imme­dia­te effect and the per­so­nal data lin­ked to the use of the system for which the aut­ho­ri­sa­ti­on was reque­sted shall be dele­ted. In any case, such AI system for post remo­te bio­me­tric iden­ti­fi­ca­ti­on shall not be used for law enforce­ment pur­po­ses in an unt­ar­ge­ted way, wit­hout any link to a cri­mi­nal offence, a cri­mi­nal pro­ce­e­ding, a genui­ne and pre­sent or genui­ne and fore­seeable thre­at of a cri­mi­nal offence or the search for a spe­ci­fic miss­ing per­son. It shall be ensu­red that no decis­i­on that pro­du­ces an adver­se legal effect on a per­son may be taken by the law enforce­ment aut­ho­ri­ties sole­ly based on the out­put of the­se post remo­te bio­me­tric iden­ti­fi­ca­ti­on systems. This para­graph is wit­hout pre­ju­di­ce to the pro­vi­si­ons of Artic­le 10 of the Direc­ti­ve (EU) 2016/680 and Artic­le 9 of the GDPR for the pro­ce­s­sing of bio­me­tric data. Regard­less of the pur­po­se or deployer, each use of the­se systems shall be docu­men­ted in the rele­vant poli­ce file and shall be made available to the rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ty and the natio­nal data pro­tec­tion aut­ho­ri­ty upon request, exclu­ding the dis­clo­sure of sen­si­ti­ve ope­ra­tio­nal data rela­ted to law enforce­ment. 
This sub­pa­ra­graph shall be wit­hout pre­ju­di­ce to the powers con­fer­red by the Direc­ti­ve 2016/680 to super­vi­so­ry aut­ho­ri­ties. Deployers shall, in addi­ti­on, sub­mit annu­al reports to the rele­vant mar­ket sur­veil­lan­ce and natio­nal data pro­tec­tion aut­ho­ri­ties on the uses of post-remo­te bio­me­tric iden­ti­fi­ca­ti­on systems, exclu­ding the dis­clo­sure of sen­si­ti­ve ope­ra­tio­nal data rela­ted to law enforce­ment. The reports can be aggre­ga­ted to cover seve­ral deployments in one ope­ra­ti­on. Mem­ber Sta­tes may intro­du­ce, in accordance with Uni­on law, more rest­ric­ti­ve laws on the use of post remo­te bio­me­tric iden­ti­fi­ca­ti­on systems. 6b. Wit­hout pre­ju­di­ce to Artic­le 52, deployers of high-risk AI systems refer­red to in Annex III that make decis­i­ons or assist in making decis­i­ons rela­ted to natu­ral per­sons shall inform the natu­ral per­sons that they are sub­ject to the use of the high-risk AI system. For high risk AI systems used for law enforce­ment pur­po­ses Artic­le 13 of Direc­ti­ve 2016/680 shall app­ly. 6c. Deployers shall coope­ra­te with the rele­vant natio­nal com­pe­tent aut­ho­ri­ties on any action tho­se aut­ho­ri­ties take in rela­ti­on with the high-risk system in order to imple­ment this Regulation. 
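Illustrative note (not part of the Regulation): paragraph 5 of Article 29 above sets a retention floor for automatically generated logs, at least six months, and longer where other Union or national law so provides. The sketch below shows one possible way a deployer could compute the earliest permissible deletion date under that reading; the function and parameter names are assumptions chosen for this example, not terms defined by the Act.

```python
from calendar import monthrange
from datetime import date
from typing import Optional


def add_months(d: date, months: int) -> date:
    """Return the date `months` months after `d`, clamping to the last day of the target month."""
    index = d.month - 1 + months
    year, month = d.year + index // 12, index % 12 + 1
    return date(year, month, min(d.day, monthrange(year, month)[1]))


def earliest_deletion_date(log_created: date,
                           other_law_deadline: Optional[date] = None) -> date:
    """Earliest date on which a deployer could delete an automatically generated log:
    six months after creation at the minimum, or later where another Union or
    national rule (e.g. data protection law) requires a longer retention period."""
    floor = add_months(log_created, 6)
    return max(floor, other_law_deadline) if other_law_deadline else floor


if __name__ == "__main__":
    print(earliest_deletion_date(date(2025, 1, 31)))                     # 2025-07-31
    print(earliest_deletion_date(date(2025, 1, 31), date(2026, 1, 31)))  # 2026-01-31
```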

Artic­le 29a – Fun­da­men­tal rights impact assess­ment for high-risk AI systems

1. Prior to deploying a high-risk AI system as defined in Article 6(2), with the exception of AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law or private operators providing public services and operators deploying high-risk systems referred to in Annex III, point 5, (b) and (ca) shall perform an assessment of the impact on fundamental rights that the use of the system may produce. For that purpose, deployers shall perform an assessment consisting of: (a) a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose; (b) a description of the period of time and frequency in which each high-risk AI system is intended to be used; (c) the categories of natural persons and groups likely to be affected by its use in the specific context; (d) the specific risks of harm likely to impact the categories of persons or group of persons identified pursuant to point (c), taking into account the information given by the provider pursuant to Article 13; (e) a description of the implementation of human oversight measures, according to the instructions of use; (f) the measures to be taken in case of the materialization of these risks, including their arrangements for internal governance and complaint mechanisms. 2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the factors listed in paragraph 1 change or are no longer up to date, the deployer will take the necessary steps to update the information. 3. Once the impact assessment has been performed, the deployer shall notify the market surveillance authority of the results of the assessment, submitting the filled template referred to in paragraph 5 as a part of the notification. In the case referred to in Article 47(1), deployers may be exempted from these obligations. 4. If any of the obligations laid down in this article are already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 shall be conducted in conjunction with that data protection impact assessment. 5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in implementing the obligations of this Article in a simplified manner. 
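Illustrative note (not part of the Regulation): paragraph 1 of Article 29a above enumerates six elements, points (a) to (f), that the fundamental rights impact assessment must cover, and paragraph 5 leaves the actual questionnaire template to the AI Office. The sketch below merely mirrors those six elements as record fields so a deployer could capture an assessment and check it for completeness; all identifiers are assumptions made for this example.

```python
from dataclasses import dataclass, fields


@dataclass
class FundamentalRightsImpactAssessment:
    """One field per element of Article 29a(1), points (a) to (f)."""
    deployer_processes: str            # (a) processes in which the system is used, in line with its intended purpose
    period_and_frequency: str          # (b) period of time and frequency of the intended use
    affected_categories: list[str]     # (c) categories of natural persons and groups likely to be affected
    specific_risks_of_harm: list[str]  # (d) specific risks of harm to those categories, drawing on Article 13 information
    human_oversight_measures: str      # (e) implementation of human oversight measures per the instructions of use
    mitigation_measures: str           # (f) measures if risks materialise, incl. internal governance and complaint mechanisms

    def missing_elements(self) -> list[str]:
        """Names of elements that have been left empty and still need to be completed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


if __name__ == "__main__":
    fria = FundamentalRightsImpactAssessment(
        deployer_processes="Eligibility screening within a public benefits workflow",
        period_and_frequency="Continuous use, reviewed quarterly",
        affected_categories=["applicants", "household members"],
        specific_risks_of_harm=[],
        human_oversight_measures="Caseworker review of every adverse recommendation",
        mitigation_measures="Escalation to a human panel; internal complaint mechanism",
    )
    print(fria.missing_elements())  # ['specific_risks_of_harm']
```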

Chapter 4 NOTIFYING AUTHORITIES AND NOTIFIED BODIES

Artic­le 30 – Noti­fy­ing authorities

1. Each Mem­ber Sta­te shall desi­gna­te or estab­lish at least one noti­fy­ing aut­ho­ri­ty respon­si­ble for set­ting up and car­ry­ing out the neces­sa­ry pro­ce­du­res for the assess­ment, desi­gna­ti­on and noti­fi­ca­ti­on of con­for­mi­ty assess­ment bodies and for their moni­to­ring. The­se pro­ce­du­res shall be deve­lo­ped in coope­ra­ti­on bet­ween the noti­fy­ing aut­ho­ri­ties of all Mem­ber Sta­tes. 2. Mem­ber Sta­tes may deci­de that the assess­ment and moni­to­ring refer­red to in para­graph 1 shall be car­ri­ed out by a natio­nal accre­di­ta­ti­on body within the mea­ning of and in accordance with Regu­la­ti­on (EC) No 765/2008. 3. Noti­fy­ing aut­ho­ri­ties shall be estab­lished, orga­ni­s­ed and ope­ra­ted in such a way that no con­flict of inte­rest ari­ses with con­for­mi­ty assess­ment bodies and the objec­ti­vi­ty and impar­tia­li­ty of their acti­vi­ties are safe­guard­ed. 4. Noti­fy­ing aut­ho­ri­ties shall be orga­ni­s­ed in such a way that decis­i­ons rela­ting to the noti­fi­ca­ti­on of con­for­mi­ty assess­ment bodies are taken by com­pe­tent per­sons dif­fe­rent from tho­se who car­ri­ed out the assess­ment of tho­se bodies. 5. Noti­fy­ing aut­ho­ri­ties shall not offer or pro­vi­de any acti­vi­ties that con­for­mi­ty assess­ment bodies per­form or any con­sul­tan­cy ser­vices on a com­mer­cial or com­pe­ti­ti­ve basis. 6. Noti­fy­ing aut­ho­ri­ties shall safe­guard the con­fi­den­tia­li­ty of the infor­ma­ti­on they obtain in accordance with Artic­le 70. 7. Noti­fy­ing aut­ho­ri­ties shall have an ade­qua­te num­ber of com­pe­tent per­son­nel at their dis­po­sal for the pro­per per­for­mance of their tasks. Com­pe­tent per­son­nel shall have the neces­sa­ry exper­ti­se, whe­re appli­ca­ble, for their func­tion, in fields such as infor­ma­ti­on tech­no­lo­gies, arti­fi­ci­al intel­li­gence and law, inclu­ding the super­vi­si­on of fun­da­men­tal rights. 

Artic­le 31 – Appli­ca­ti­on of a con­for­mi­ty assess­ment body for notification

1. Con­for­mi­ty assess­ment bodies shall sub­mit an appli­ca­ti­on for noti­fi­ca­ti­on to the noti­fy­ing aut­ho­ri­ty of the Mem­ber Sta­te in which they are estab­lished. 2. The appli­ca­ti­on for noti­fi­ca­ti­on shall be accom­pa­nied by a descrip­ti­on of the con­for­mi­ty assess­ment acti­vi­ties, the con­for­mi­ty assess­ment modu­le or modu­les and the types of AI systems for which the con­for­mi­ty assess­ment body claims to be com­pe­tent, as well as by an accre­di­ta­ti­on cer­ti­fi­ca­te, whe­re one exists, issued by a natio­nal accre­di­ta­ti­on body attest­ing that the con­for­mi­ty assess­ment body ful­fils the requi­re­ments laid down in Artic­le 33. Any valid docu­ment rela­ted to exi­sting desi­gna­ti­ons of the appli­cant noti­fi­ed body under any other Uni­on har­mo­ni­sa­ti­on legis­la­ti­on shall be added. 3. Whe­re the con­for­mi­ty assess­ment body con­cer­ned can­not pro­vi­de an accre­di­ta­ti­on cer­ti­fi­ca­te, it shall pro­vi­de the noti­fy­ing aut­ho­ri­ty with all the docu­men­ta­ry evi­dence neces­sa­ry for the veri­fi­ca­ti­on, reco­gni­ti­on and regu­lar moni­to­ring of its com­pli­ance with the requi­re­ments laid down in Artic­le 33. For noti­fi­ed bodies which are desi­gna­ted under any other Uni­on har­mo­ni­sa­ti­on legis­la­ti­on, all docu­ments and cer­ti­fi­ca­tes lin­ked to tho­se desi­gna­ti­ons may be used to sup­port their desi­gna­ti­on pro­ce­du­re under this Regu­la­ti­on, as appro­pria­te. The noti­fi­ed body shall update the docu­men­ta­ti­on refer­red to in para­graph 2 and para­graph 3 when­ever rele­vant chan­ges occur, in order to enable the aut­ho­ri­ty respon­si­ble for noti­fi­ed bodies to moni­tor and veri­fy con­ti­nuous com­pli­ance with all the requi­re­ments laid down in Artic­le 33. 

Artic­le 32 – Noti­fi­ca­ti­on procedure

1. Noti­fy­ing aut­ho­ri­ties may only noti­fy con­for­mi­ty assess­ment bodies which have satis­fied the requi­re­ments laid down in Artic­le 33. 2. Noti­fy­ing aut­ho­ri­ties shall noti­fy the Com­mis­si­on and the other Mem­ber Sta­tes using the elec­tro­nic noti­fi­ca­ti­on tool deve­lo­ped and mana­ged by the Com­mis­si­on of each con­for­mi­ty assess­ment body refer­red to in para­graph 1. 3. The noti­fi­ca­ti­on refer­red to in para­graph 2 shall include full details of the con­for­mi­ty assess­ment acti­vi­ties, the con­for­mi­ty assess­ment modu­le or modu­les and the types of AI systems con­cer­ned and the rele­vant atte­sta­ti­on of com­pe­tence. Whe­re a noti­fi­ca­ti­on is not based on an accre­di­ta­ti­on cer­ti­fi­ca­te as refer­red to in Artic­le 31 (2), the noti­fy­ing aut­ho­ri­ty shall pro­vi­de the Com­mis­si­on and the other Mem­ber Sta­tes with docu­men­ta­ry evi­dence which attests to the con­for­mi­ty assess­ment body’s com­pe­tence and the arran­ge­ments in place to ensu­re that that body will be moni­to­red regu­lar­ly and will con­ti­n­ue to satis­fy the requi­re­ments laid down in Artic­le 33. 4. The con­for­mi­ty assess­ment body con­cer­ned may per­form the acti­vi­ties of a noti­fi­ed body only whe­re no objec­tions are rai­sed by the Com­mis­si­on or the other Mem­ber Sta­tes within two weeks of a noti­fi­ca­ti­on by a noti­fy­ing aut­ho­ri­ty whe­re it inclu­des an accre­di­ta­ti­on cer­ti­fi­ca­te refer­red to in Artic­le 31(2), or within two months of a noti­fi­ca­ti­on by the noti­fy­ing aut­ho­ri­ty whe­re it inclu­des docu­men­ta­ry evi­dence refer­red to in Artic­le 31(3). 4a. Whe­re objec­tions are rai­sed, the Com­mis­si­on shall wit­hout delay enter into con­sul­ta­ti­on with the rele­vant Mem­ber Sta­tes and the con­for­mi­ty assess­ment body. In view the­reof, the Com­mis­si­on shall deci­de whe­ther the aut­ho­ri­sa­ti­on is justi­fi­ed or not. The Com­mis­si­on shall address its decis­i­on to the Mem­ber Sta­te con­cer­ned and the rele­vant con­for­mi­ty assess­ment body. 

Artic­le 33 – Requi­re­ments rela­ting to noti­fi­ed bodies

1. A noti­fi­ed body shall be estab­lished under natio­nal law of a Mem­ber Sta­te and have legal per­so­na­li­ty. 2. Noti­fi­ed bodies shall satis­fy the orga­ni­sa­tio­nal, qua­li­ty manage­ment, resour­ces and pro­cess requi­re­ments that are neces­sa­ry to ful­fil their tasks, as well as sui­ta­ble cyber­se­cu­ri­ty requi­re­ments. 3. The orga­ni­sa­tio­nal struc­tu­re, allo­ca­ti­on of respon­si­bi­li­ties, report­ing lines and ope­ra­ti­on of noti­fi­ed bodies shall be such as to ensu­re that the­re is con­fi­dence in the per­for­mance by and in the results of the con­for­mi­ty assess­ment acti­vi­ties that the noti­fi­ed bodies con­duct. 4. Noti­fi­ed bodies shall be inde­pen­dent of the pro­vi­der of a high-risk AI system in rela­ti­on to which it per­forms con­for­mi­ty assess­ment acti­vi­ties. Noti­fi­ed bodies shall also be inde­pen­dent of any other ope­ra­tor having an eco­no­mic inte­rest in the high-risk AI system that is asses­sed, as well as of any com­pe­ti­tors of the pro­vi­der. This shall not pre­clude the use of asses­sed AI systems that are neces­sa­ry for the ope­ra­ti­ons of the con­for­mi­ty assess­ment body or the use of such systems for per­so­nal pur­po­ses. 4a. A con­for­mi­ty assess­ment body, its top-level manage­ment and the per­son­nel respon­si­ble for car­ry­ing out the con­for­mi­ty assess­ment tasks shall not be direct­ly invol­ved in the design, deve­lo­p­ment, mar­ke­ting or use of high-risk AI systems, or repre­sent the par­ties enga­ged in tho­se acti­vi­ties. They shall not enga­ge in any acti­vi­ty that may con­flict with their inde­pen­dence of jud­ge­ment or inte­gri­ty in rela­ti­on to con­for­mi­ty assess­ment acti­vi­ties for which they are noti­fi­ed. This shall in par­ti­cu­lar app­ly to con­sul­tan­cy ser­vices. 5. Noti­fi­ed bodies shall be orga­ni­s­ed and ope­ra­ted so as to safe­guard the inde­pen­dence, objec­ti­vi­ty and impar­tia­li­ty of their acti­vi­ties. Noti­fi­ed bodies shall docu­ment and imple­ment a struc­tu­re and pro­ce­du­res to safe­guard impar­tia­li­ty and to pro­mo­te and app­ly the prin­ci­ples of impar­tia­li­ty throug­hout their orga­ni­sa­ti­on, per­son­nel and assess­ment acti­vi­ties. 6. Noti­fi­ed bodies shall have docu­men­ted pro­ce­du­res in place ensu­ring that their per­son­nel, com­mit­tees, sub­si­dia­ries, sub­con­trac­tors and any asso­cia­ted body or per­son­nel of exter­nal bodies respect the con­fi­den­tia­li­ty of the infor­ma­ti­on in accordance with Artic­le 70 which comes into their pos­ses­si­on during the per­for­mance of con­for­mi­ty assess­ment acti­vi­ties, except when dis­clo­sure is requi­red by law. The staff of noti­fi­ed bodies shall be bound to obser­ve pro­fes­sio­nal sec­re­cy with regard to all infor­ma­ti­on obtai­ned in car­ry­ing out their tasks under this Regu­la­ti­on, except in rela­ti­on to the noti­fy­ing aut­ho­ri­ties of the Mem­ber Sta­te in which their acti­vi­ties are car­ri­ed out. 7. Noti­fi­ed bodies shall have pro­ce­du­res for the per­for­mance of acti­vi­ties which take due account of the size of an under­ta­king, the sec­tor in which it ope­ra­tes, its struc­tu­re, the degree of com­ple­xi­ty of the AI system in que­sti­on. 8. Noti­fi­ed bodies shall take out appro­pria­te lia­bi­li­ty insu­rance for their con­for­mi­ty assess­ment acti­vi­ties, unless lia­bi­li­ty is assu­med by the Mem­ber Sta­te in which they are estab­lished in accordance with natio­nal law or that Mem­ber Sta­te is its­elf direct­ly respon­si­ble for the con­for­mi­ty assess­ment. 9. 
Noti­fi­ed bodies shall be capa­ble of car­ry­ing out all the tasks fal­ling to them under this Regu­la­ti­on with the hig­hest degree of pro­fes­sio­nal inte­gri­ty and the requi­si­te com­pe­tence in the spe­ci­fic field, whe­ther tho­se tasks are car­ri­ed out by noti­fi­ed bodies them­sel­ves or on their behalf and under their respon­si­bi­li­ty. 10. Noti­fi­ed bodies shall have suf­fi­ci­ent inter­nal com­pe­ten­ces to be able to effec­tively eva­lua­te the tasks con­duc­ted by exter­nal par­ties on their behalf. The noti­fi­ed body shall have per­ma­nent avai­la­bi­li­ty of suf­fi­ci­ent admi­ni­stra­ti­ve, tech­ni­cal, legal and sci­en­ti­fic per­son­nel who pos­sess expe­ri­ence and know­ledge rela­ting to the rele­vant types of arti­fi­ci­al intel­li­gence systems, data and data com­pu­ting and to the requi­re­ments set out in Chap­ter 2 of this Tit­le. 11. Noti­fi­ed bodies shall par­ti­ci­pa­te in coor­di­na­ti­on acti­vi­ties as refer­red to in Artic­le 38. They shall also take part direct­ly or be repre­sen­ted in Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons, or ensu­re that they are awa­re and up to date in respect of rele­vant standards. 

Artic­le 33a – Pre­sump­ti­on of con­for­mi­ty with requi­re­ments rela­ting to noti­fi­ed bodies

Whe­re a con­for­mi­ty assess­ment body demon­stra­tes its con­for­mi­ty with the cri­te­ria laid down in the rele­vant har­mo­ni­s­ed stan­dards or parts the­reof the refe­ren­ces of which have been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on it shall be pre­su­med to com­ply with the requi­re­ments set out in Artic­le 33 in so far as the appli­ca­ble har­mo­ni­s­ed stan­dards cover tho­se requirements. 

Artic­le 34 – Sub­si­dia­ries of and sub­con­trac­ting by noti­fi­ed bodies

1. Whe­re a noti­fi­ed body sub­con­tracts spe­ci­fic tasks con­nec­ted with the con­for­mi­ty assess­ment or has recour­se to a sub­si­dia­ry, it shall ensu­re that the sub­con­trac­tor or the sub­si­dia­ry meets the requi­re­ments laid down in Artic­le 33 and shall inform the noti­fy­ing aut­ho­ri­ty accor­din­gly. 2. Noti­fi­ed bodies shall take full respon­si­bi­li­ty for the tasks per­for­med by sub­con­trac­tors or sub­si­dia­ries whe­re­ver the­se are estab­lished. 3. Acti­vi­ties may be sub­con­trac­ted or car­ri­ed out by a sub­si­dia­ry only with the agree­ment of the pro­vi­der. Noti­fi­ed bodies shall make a list of their sub­si­dia­ries publicly available. 4. The rele­vant docu­ments con­cer­ning the assess­ment of the qua­li­fi­ca­ti­ons of the sub­con­trac­tor or the sub­si­dia­ry and the work car­ri­ed out by them under this Regu­la­ti­on shall be kept at the dis­po­sal of the noti­fy­ing aut­ho­ri­ty for a peri­od of 5 years from the ter­mi­na­ti­on date of the sub­con­trac­ting activity. 

Artic­le 34a – Ope­ra­tio­nal obli­ga­ti­ons of noti­fi­ed bodies

1. Notified bodies shall verify the conformity of the high-risk AI system in accordance with the conformity assessment procedures referred to in Article 43. 2. Notified bodies shall perform their activities while avoiding unnecessary burdens for providers, and taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the high-risk AI system in question. In so doing, the notified body shall nevertheless respect the degree of rigour and the level of protection required for the compliance of the high-risk AI system with the requirements of this Regulation. Particular attention shall be paid to minimising administrative burdens and compliance costs for micro and small enterprises as defined in Commission Recommendation 2003/361/EC. 3. Notified bodies shall make available and submit upon request all relevant documentation, including the providers' documentation, to the notifying authority referred to in Article 30 to allow that authority to conduct its assessment, designation, notification, monitoring activities and to facilitate the assessment outlined in this Chapter. 

Artic­le 35 – Iden­ti­fi­ca­ti­on num­bers and lists of noti­fi­ed bodies desi­gna­ted under this Regulation

1. The Com­mis­si­on shall assign an iden­ti­fi­ca­ti­on num­ber to noti­fi­ed bodies. It shall assign a sin­gle num­ber, even whe­re a body is noti­fi­ed under seve­ral Uni­on acts. 2. The Com­mis­si­on shall make publicly available the list of the bodies noti­fi­ed under this Regu­la­ti­on, inclu­ding the iden­ti­fi­ca­ti­on num­bers that have been assi­gned to them and the acti­vi­ties for which they have been noti­fi­ed. The Com­mis­si­on shall ensu­re that the list is kept up to date. 

Artic­le 36 – Chan­ges to notifications

‑1. The noti­fy­ing aut­ho­ri­ty shall noti­fy the Com­mis­si­on and the other Mem­ber Sta­tes of any rele­vant chan­ges to the noti­fi­ca­ti­on of a noti­fi­ed body via the elec­tro­nic noti­fi­ca­ti­on tool refer­red to in Artic­le 32(2). ‑1a. The pro­ce­du­res descri­bed in Artic­le 31 and 32 shall app­ly to exten­si­ons of the scope of the noti­fi­ca­ti­on. For chan­ges to the noti­fi­ca­ti­on other than exten­si­ons of its scope, the pro­ce­du­res laid down in the fol­lo­wing para­graphs shall app­ly. Whe­re a noti­fi­ed body deci­des to cea­se its con­for­mi­ty assess­ment acti­vi­ties it shall inform the noti­fy­ing aut­ho­ri­ty and the pro­vi­ders con­cer­ned as soon as pos­si­ble and in the case of a plan­ned ces­sa­ti­on one year befo­re cea­sing its acti­vi­ties. The cer­ti­fi­ca­tes may remain valid for a tem­po­ra­ry peri­od of nine months after ces­sa­ti­on of the noti­fi­ed body’s acti­vi­ties on con­di­ti­on that ano­ther noti­fi­ed body has con­firm­ed in wri­ting that it will assu­me respon­si­bi­li­ties for the AI systems cover­ed by tho­se cer­ti­fi­ca­tes. The new noti­fi­ed body shall com­ple­te a full assess­ment of the AI systems affec­ted by the end of that peri­od befo­re issuing new cer­ti­fi­ca­tes for tho­se systems. Whe­re the noti­fi­ed body has cea­sed its acti­vi­ty, the noti­fy­ing aut­ho­ri­ty shall with­draw the desi­gna­ti­on. 1. Whe­re a noti­fy­ing aut­ho­ri­ty has suf­fi­ci­ent rea­sons to con­sider that a noti­fi­ed body no lon­ger meets the requi­re­ments laid down in Artic­le 33, or that it is fai­ling to ful­fil its obli­ga­ti­ons, the noti­fy­ing aut­ho­ri­ty shall wit­hout delay inve­sti­ga­te the mat­ter with the utmost dili­gence. In that con­text, it shall inform the noti­fi­ed body con­cer­ned about the objec­tions rai­sed and give it the pos­si­bi­li­ty to make its views known. If the noti­fy­ing aut­ho­ri­ty comes to the con­clu­si­on that the noti­fi­ed body no lon­ger meets the requi­re­ments laid down in Artic­le 33 or that it is fai­ling to ful­fil its obli­ga­ti­ons, it shall rest­rict, sus­pend or with­draw noti­fi­ca­ti­on as appro­pria­te, depen­ding on the serious­ness of the fail­ure to meet tho­se requi­re­ments or ful­fil tho­se obli­ga­ti­ons. It shall imme­dia­te­ly inform the Com­mis­si­on and the other Mem­ber Sta­tes accor­din­gly. 2a. Whe­re its desi­gna­ti­on has been sus­pen­ded, rest­ric­ted, or ful­ly or par­ti­al­ly with­drawn, the noti­fi­ed body shall inform the manu­fac­tu­r­ers con­cer­ned at the latest within 10 days. 2b. In the event of rest­ric­tion, sus­pen­si­on or with­dra­wal of a noti­fi­ca­ti­on, the noti­fy­ing aut­ho­ri­ty shall take appro­pria­te steps to ensu­re that the files of the noti­fi­ed body con­cer­ned are kept and make them available to noti­fy­ing aut­ho­ri­ties in other Mem­ber Sta­tes and to mar­ket sur­veil­lan­ce aut­ho­ri­ties at their request. 2c. 
In the event of rest­ric­tion, sus­pen­si­on or with­dra­wal of a desi­gna­ti­on, the noti­fy­ing aut­ho­ri­ty shall: (a) assess the impact on the cer­ti­fi­ca­tes issued by the noti­fi­ed body; (b) sub­mit a report on its fin­dings to the Com­mis­si­on and the other Mem­ber Sta­tes within three months of having noti­fi­ed the chan­ges to the noti­fi­ca­ti­on; (c) requi­re the noti­fi­ed body to sus­pend or with­draw, within a rea­sonable peri­od of time deter­mi­ned by the aut­ho­ri­ty, any cer­ti­fi­ca­tes which were undu­ly issued in order to ensu­re the con­for­mi­ty of AI systems on the mar­ket; (d) inform the Com­mis­si­on and the Mem­ber Sta­tes about cer­ti­fi­ca­tes of which it has requi­red their sus­pen­si­on or with­dra­wal; (e) pro­vi­de the natio­nal com­pe­tent aut­ho­ri­ties of the Mem­ber Sta­te in which the pro­vi­der has its regi­stered place of busi­ness with all rele­vant infor­ma­ti­on about the cer­ti­fi­ca­tes for which it has requi­red sus­pen­si­on or with­dra­wal. That com­pe­tent aut­ho­ri­ty shall take the appro­pria­te mea­su­res, whe­re neces­sa­ry, to avo­id a poten­ti­al risk to health, safe­ty or fun­da­men­tal rights. 2d. With the excep­ti­on of cer­ti­fi­ca­tes undu­ly issued, and whe­re a noti­fi­ca­ti­on has been sus­pen­ded or rest­ric­ted, the cer­ti­fi­ca­tes shall remain valid in the fol­lo­wing cir­cum­stances: (a) the noti­fy­ing aut­ho­ri­ty has con­firm­ed, within one month of the sus­pen­si­on or rest­ric­tion, that the­re is no risk to health, safe­ty or fun­da­men­tal rights in rela­ti­on to cer­ti­fi­ca­tes affec­ted by the sus­pen­si­on or rest­ric­tion, and the noti­fy­ing aut­ho­ri­ty has out­lined a time­line and actions anti­ci­pa­ted to reme­dy the sus­pen­si­on or rest­ric­tion; or (b) the noti­fy­ing aut­ho­ri­ty has con­firm­ed that no cer­ti­fi­ca­tes rele­vant to the sus­pen­si­on will be issued, amen­ded or re-issued during the cour­se of the sus­pen­si­on or rest­ric­tion, and sta­tes whe­ther the noti­fi­ed body has the capa­bi­li­ty of con­ti­nuing to moni­tor and remain respon­si­ble for exi­sting cer­ti­fi­ca­tes issued for the peri­od of the sus­pen­si­on or rest­ric­tion. In the event that the aut­ho­ri­ty respon­si­ble for noti­fi­ed bodies deter­mi­nes that the noti­fi­ed body does not have the capa­bi­li­ty to sup­port exi­sting cer­ti­fi­ca­tes issued, the pro­vi­der shall pro­vi­de to the natio­nal com­pe­tent aut­ho­ri­ties of the Mem­ber Sta­te in which the pro­vi­der of the system cover­ed by the cer­ti­fi­ca­te has its regi­stered place of busi­ness, within three months of the sus­pen­si­on or rest­ric­tion, a writ­ten con­fir­ma­ti­on that ano­ther qua­li­fi­ed noti­fi­ed body is tem­po­r­a­ri­ly assum­ing the func­tions of the noti­fi­ed body to moni­tor and remain respon­si­ble for the cer­ti­fi­ca­tes during the peri­od of sus­pen­si­on or rest­ric­tion. 2e. 
With the excep­ti­on of cer­ti­fi­ca­tes undu­ly issued, and whe­re a desi­gna­ti­on has been with­drawn, the cer­ti­fi­ca­tes shall remain valid for a peri­od of nine months in the fol­lo­wing cir­cum­stances: (a) whe­re the natio­nal com­pe­tent aut­ho­ri­ty of the Mem­ber Sta­te in which the pro­vi­der of the AI system cover­ed by the cer­ti­fi­ca­te has its regi­stered place of busi­ness has con­firm­ed that the­re is no risk to health, safe­ty and fun­da­men­tal rights asso­cia­ted with the systems in que­sti­on; and (b) ano­ther noti­fi­ed body has con­firm­ed in wri­ting that it will assu­me imme­dia­te respon­si­bi­li­ties for tho­se systems and will have com­ple­ted assess­ment of them within twel­ve months of the with­dra­wal of the desi­gna­ti­on. In the cir­cum­stances refer­red to in the first sub­pa­ra­graph, the natio­nal com­pe­tent aut­ho­ri­ty of the Mem­ber Sta­te in which the pro­vi­der of the system cover­ed by the cer­ti­fi­ca­te has its place of busi­ness may extend the pro­vi­sio­nal vali­di­ty of the cer­ti­fi­ca­tes for fur­ther peri­ods of three months, which altog­e­ther shall not exce­ed twel­ve months. 2f. The natio­nal com­pe­tent aut­ho­ri­ty or the noti­fi­ed body assum­ing the func­tions of the noti­fi­ed body affec­ted by the chan­ge of noti­fi­ca­ti­on shall imme­dia­te­ly inform the Com­mis­si­on, the other Mem­ber Sta­tes and the other noti­fi­ed bodies thereof. 

Artic­le 37 – Chall­enge to the com­pe­tence of noti­fi­ed bodies

1. The Com­mis­si­on shall, whe­re neces­sa­ry, inve­sti­ga­te all cases whe­re the­re are rea­sons to doubt the com­pe­tence of a noti­fi­ed body or the con­tin­ued ful­film­ent by a noti­fi­ed body of the requi­re­ments laid down in Artic­le 33 and their appli­ca­ble respon­si­bi­li­ties. 2. The Noti­fy­ing aut­ho­ri­ty shall pro­vi­de the Com­mis­si­on, on request, with all rele­vant infor­ma­ti­on rela­ting to the noti­fi­ca­ti­on or the main­ten­an­ce of the com­pe­tence of the noti­fi­ed body con­cer­ned. 3. The Com­mis­si­on shall ensu­re that all sen­si­ti­ve infor­ma­ti­on obtai­ned in the cour­se of its inve­sti­ga­ti­ons pur­su­ant to this Artic­le is trea­ted con­fi­den­ti­al­ly in accordance with Artic­le 70. 4. Whe­re the Com­mis­si­on ascer­ta­ins that a noti­fi­ed body does not meet or no lon­ger meets the requi­re­ments for its noti­fi­ca­ti­on, it shall inform the noti­fy­ing Mem­ber Sta­te accor­din­gly and request it to take the neces­sa­ry cor­rec­ti­ve mea­su­res, inclu­ding sus­pen­si­on or with­dra­wal of the noti­fi­ca­ti­on if neces­sa­ry. Whe­re the Mem­ber Sta­te fails to take the neces­sa­ry cor­rec­ti­ve mea­su­res, the Com­mis­si­on may, by means of imple­men­ting acts, sus­pend, rest­rict or with­draw the desi­gna­ti­on. That imple­men­ting act shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 74(2).

Artic­le 38 – Coor­di­na­ti­on of noti­fi­ed bodies

1. The Com­mis­si­on shall ensu­re that, with regard to high-risk AI systems, appro­pria­te coor­di­na­ti­on and coope­ra­ti­on bet­ween noti­fi­ed bodies acti­ve in the con­for­mi­ty assess­ment pro­ce­du­res pur­su­ant to this Regu­la­ti­on are put in place and pro­per­ly ope­ra­ted in the form of a sec­to­ral group of noti­fi­ed bodies. 2. The noti­fy­ing aut­ho­ri­ty shall ensu­re that the bodies noti­fi­ed by them par­ti­ci­pa­te in the work of that group, direct­ly or by means of desi­gna­ted repre­sen­ta­ti­ves. 2a. The Com­mis­si­on shall pro­vi­de for the exch­an­ge of know­ledge and best prac­ti­ces bet­ween the Mem­ber Sta­tes’ noti­fy­ing authorities. 

Artic­le 39 – Con­for­mi­ty assess­ment bodies of third countries

Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided that they meet the requirements in Article 33 or they ensure an equivalent level of compliance. 

Chap­ter 5 STANDARDS, CONFORMITY ASSESSMENT, CERTIFICATES, REGISTRATION

Artic­le 40 – Har­mo­ni­s­ed stan­dards and stan­dar­di­sati­on deliverables

1. High-risk AI systems or general purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title or, as applicable, with the requirements set out in [Chapter on GPAI], to the extent those standards cover those requirements. 2. The Commission shall issue standardisation requests covering all requirements of Title II Chapter III and as applicable [GPAI Chapter] of this Regulation, in accordance with Article 10 of Regulation (EU) No 1025/2012 without undue delay. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems resource performance, such as reduction of energy and other resources consumption of the high-risk AI system during its lifecycle, and on energy efficient development of general-purpose AI models. When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the Advisory Forum. When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be consistent, including with the existing and future standards developed in the various sectors for products covered by the existing Union safety legislation listed in Annex II, clear and aimed at ensuring that AI systems or models placed on the market or put into service in the Union meet the relevant requirements laid down in this Regulation. The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the above objectives in accordance with Article 24 of Regulation (EU) No 1025/2012. 1c. The actors involved in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as competitiveness and growth of the Union market, and contribute to strengthening global cooperation on standardisation and taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and enhance multi-stakeholder governance ensuring a balanced representation of interests and effective participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU) No 1025/2012.

Artic­le 41 – Com­mon specifications

1. The Com­mis­si­on is empowered to adopt, after con­sul­ting the Advi­so­ry Forum refer­red to in Artic­le 58a, imple­men­ting acts in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 74(2) estab­li­shing com­mon spe­ci­fi­ca­ti­ons for the requi­re­ments set out in Chap­ter 2 of this Tit­le or, as appli­ca­ble, with requi­re­ments set out in Artic­le [GPAI Chap­ter], for AI systems within the scope of this Regu­la­ti­on, whe­re the fol­lo­wing con­di­ti­ons have been ful­fil­led: (a) the Com­mis­si­on has reque­sted, pur­su­ant to Artic­le 10(1) of Regu­la­ti­on 1025/2012, one or more Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons to draft a har­mo­ni­s­ed stan­dard for the requi­re­ments set out in Chap­ter 2 of this Tit­le; and (i) the request has not been accept­ed by any of the Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­ons; or (ii) the har­mo­ni­s­ed stan­dards addres­sing that request are not deli­ver­ed within the dead­line set in accordance with artic­le 10(1) of Regu­la­ti­on 1025/2012; or (iii) the rele­vant har­mo­ni­s­ed stan­dards insuf­fi­ci­ent­ly address fun­da­men­tal rights con­cerns; or (iv) the har­mo­ni­s­ed stan­dards do not com­ply with the request; and (b) no refe­rence to har­mo­ni­s­ed stan­dards cove­ring the requi­re­ments refer­red to in Chap­ter II of this Tit­le has been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on, in accordance with Regu­la­ti­on (EU) No 1025/2012, and no such refe­rence is expec­ted to be published within a rea­sonable peri­od. 1a. Befo­re pre­pa­ring a draft imple­men­ting act, the Com­mis­si­on shall inform the com­mit­tee refer­red to in Artic­le 22 of Regu­la­ti­on EU (No) 1025/2012 that it con­siders that the con­di­ti­ons in para­graph 1 are ful­fil­led. 3. High-risk AI systems which are in con­for­mi­ty with the com­mon spe­ci­fi­ca­ti­ons refer­red to in para­graph 1, or parts the­reof, shall be pre­su­med to be in con­for­mi­ty with the requi­re­ments set out in Chap­ter 2 of this Tit­le, to the ext­ent tho­se com­mon spe­ci­fi­ca­ti­ons cover tho­se requi­re­ments. 3a. Whe­re a har­mo­ni­s­ed stan­dard is adopted by a Euro­pean stan­dar­di­sati­on orga­ni­sa­ti­on and pro­po­sed to the Com­mis­si­on for the publi­ca­ti­on of its refe­rence in the Offi­ci­al Jour­nal of the Euro­pean Uni­on, the Com­mis­si­on shall assess the har­mo­ni­s­ed stan­dard in accordance with Regu­la­ti­on (EU) No 1025/2012. When refe­rence of a har­mo­ni­s­ed stan­dard is published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on, the Com­mis­si­on shall repeal acts refer­red to in para­graph 1 and 1b, or parts the­reof which cover the same requi­re­ments set out in Chap­ter 2 of this Tit­le. 4. Whe­re pro­vi­ders of high-risk AI systems do not com­ply with the com­mon spe­ci­fi­ca­ti­ons refer­red to in para­graph 1, they shall duly justi­fy that they have adopted tech­ni­cal solu­ti­ons that meet the requi­re­ments refer­red to in Chap­ter II to a level at least equi­va­lent the­re­to. 4b. When a Mem­ber Sta­te con­siders that a com­mon spe­ci­fi­ca­ti­on does not enti­re­ly satis­fy the requi­re­ments set out in Chap­ter 2 of this Tit­le, it shall inform the Com­mis­si­on the­reof with a detail­ed expl­ana­ti­on and the Com­mis­si­on shall assess that infor­ma­ti­on and, if appro­pria­te, amend the imple­men­ting act estab­li­shing the com­mon spe­ci­fi­ca­ti­on in question. 

Artic­le 42 – Pre­sump­ti­on of con­for­mi­ty with cer­tain requirements

1. High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to be in compliance with the respective requirements set out in Article 10(4). 2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. 

Artic­le 43 – Con­for­mi­ty assessment

1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following procedures: (a) the conformity assessment procedure based on internal control referred to in Annex VI; or (b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. In demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider shall follow the conformity assessment procedure set out in Annex VII in the following cases: (a) where harmonised standards referred to in Article 40, do not exist and common specifications referred to in Article 41 are not available; (aa) the provider has not applied or has applied only in part the harmonised standard; (b) where the common specifications referred to in point (a) exist but the provider has not applied them; (c) where one or more of the harmonised standards referred to in point (a) has been published with a restriction and only on the part of the standard that was restricted. For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body. 2. For high-risk AI systems referred to in points 2 to 8 of Annex III providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. 3. For high-risk AI systems, to which legal acts listed in Annex II, section A, apply, the provider shall follow the relevant conformity assessment as required under those legal acts. The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. For the purpose of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Chapter 2 of this Title, provided that the compliance of those notified bodies with requirements laid down in Article 33(4), (9) and (10) has been assessed in the context of the notification procedure under those legal acts. 
Where the legal acts listed in Annex II, section A, enable the manufacturer of the product to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may make use of that option only if he has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering the requirements set out in Chapter 2 of this Title. 4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. 5. The Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and Annex VII in light of technical progress. 6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies. 

Artic­le 44 – Certificates

1. Cer­ti­fi­ca­tes issued by noti­fi­ed bodies in accordance with Annex VII shall be drawn-up in a lan­guage which can be easi­ly under­s­tood by the rele­vant aut­ho­ri­ties in the Mem­ber Sta­te in which the noti­fi­ed body is estab­lished. 2. Cer­ti­fi­ca­tes shall be valid for the peri­od they indi­ca­te, which shall not exce­ed five years for AI systems cover­ed by Annex II and four years for AI systems cover­ed by Annex III. On appli­ca­ti­on by the pro­vi­der, the vali­di­ty of a cer­ti­fi­ca­te may be exten­ded for fur­ther peri­ods, each not exce­e­ding five years for AI systems cover­ed by Annex II and four years for AI systems cover­ed by Annex III, based on a re-assess­ment in accordance with the appli­ca­ble con­for­mi­ty assess­ment pro­ce­du­res. Any sup­ple­ment to a cer­ti­fi­ca­te shall remain valid as long as the cer­ti­fi­ca­te which it sup­ple­ments is valid. 3. Whe­re a noti­fi­ed body finds that an AI system no lon­ger meets the requi­re­ments set out in Chap­ter 2 of this Tit­le, it shall, taking account of the prin­ci­ple of pro­por­tio­na­li­ty, sus­pend or with­draw the cer­ti­fi­ca­te issued or impo­se any rest­ric­tions on it, unless com­pli­ance with tho­se requi­re­ments is ensu­red by appro­pria­te cor­rec­ti­ve action taken by the pro­vi­der of the system within an appro­pria­te dead­line set by the noti­fi­ed body. The noti­fi­ed body shall give rea­sons for its decis­i­on. An appeal pro­ce­du­re against decis­i­ons of the noti­fi­ed bodies, inclu­ding on issued con­for­mi­ty cer­ti­fi­ca­tes, shall be available. 
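Illustrative note (not part of the Regulation): paragraph 2 of Article 44 above caps each validity period of a certificate at five years for AI systems covered by Annex II and four years for those covered by Annex III, with extensions of at most the same length following re-assessment. A minimal helper reflecting those caps could look as follows; the enum and function names are assumptions made for this example.

```python
from datetime import date
from enum import Enum


class Coverage(Enum):
    ANNEX_II = "Annex II"    # safety components of products: periods of up to five years
    ANNEX_III = "Annex III"  # stand-alone high-risk systems: periods of up to four years


def add_years(d: date, years: int) -> date:
    """Return the date `years` years after `d` (29 February clamps to 28 February)."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)


def latest_expiry(period_start: date, coverage: Coverage) -> date:
    """Latest permissible end of a single validity period under Article 44(2)."""
    return add_years(period_start, 5 if coverage is Coverage.ANNEX_II else 4)


if __name__ == "__main__":
    issued = date(2026, 6, 1)
    print(latest_expiry(issued, Coverage.ANNEX_II))   # 2031-06-01
    print(latest_expiry(issued, Coverage.ANNEX_III))  # 2030-06-01
```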

Artic­le 46 – Infor­ma­ti­on obli­ga­ti­ons of noti­fi­ed bodies

1. Noti­fi­ed bodies shall inform the noti­fy­ing aut­ho­ri­ty of the fol­lo­wing: (a) any Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­tes, any sup­ple­ments to tho­se cer­ti­fi­ca­tes, qua­li­ty manage­ment system appr­ovals issued in accordance with the requi­re­ments of Annex VII; (b) any refu­sal, rest­ric­tion, sus­pen­si­on or with­dra­wal of a Uni­on tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te or a qua­li­ty manage­ment system appr­oval issued in accordance with the requi­re­ments of Annex VII; (c) any cir­cum­stances affec­ting the scope of or con­di­ti­ons for noti­fi­ca­ti­on; (d) any request for infor­ma­ti­on which they have recei­ved from mar­ket sur­veil­lan­ce aut­ho­ri­ties regar­ding con­for­mi­ty assess­ment acti­vi­ties; (e) on request, con­for­mi­ty assess­ment acti­vi­ties per­for­med within the scope of their noti­fi­ca­ti­on and any other acti­vi­ty per­for­med, inclu­ding cross-bor­der acti­vi­ties and sub­con­trac­ting. 2. Each noti­fi­ed body shall inform the other noti­fi­ed bodies of: (a) qua­li­ty manage­ment system appr­ovals which it has refu­sed, sus­pen­ded or with­drawn, and, upon request, of qua­li­ty system appr­ovals which it has issued; (b) EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­tes or any sup­ple­ments the­re­to which it has refu­sed, with­drawn, sus­pen­ded or other­wi­se rest­ric­ted, and, upon request, of the cer­ti­fi­ca­tes and/or sup­ple­ments the­re­to which it has issued. 3. Each noti­fi­ed body shall pro­vi­de the other noti­fi­ed bodies car­ry­ing out simi­lar con­for­mi­ty assess­ment acti­vi­ties cove­ring the same types of AI systems with rele­vant infor­ma­ti­on on issues rela­ting to nega­ti­ve and, on request, posi­ti­ve con­for­mi­ty assess­ment results. 3a. The obli­ga­ti­ons refer­red to in para­graphs 1 to 3 shall be com­plied with in accordance with Artic­le 70. 

Artic­le 47 – Dero­ga­ti­on from con­for­mi­ty assess­ment procedure

1. By way of dero­ga­ti­on from Artic­le 43 and upon a duly justi­fi­ed request, any mar­ket sur­veil­lan­ce aut­ho­ri­ty may aut­ho­ri­se the pla­cing on the mar­ket or put­ting into ser­vice of spe­ci­fic high-risk AI systems within the ter­ri­to­ry of the Mem­ber Sta­te con­cer­ned, for excep­tio­nal rea­sons of public secu­ri­ty or the pro­tec­tion of life and health of per­sons, envi­ron­men­tal pro­tec­tion and the pro­tec­tion of key indu­stri­al and infras­truc­tu­ral assets. That aut­ho­ri­sa­ti­on shall be for a limi­t­ed peri­od of time while the neces­sa­ry con­for­mi­ty assess­ment pro­ce­du­res are being car­ri­ed out, taking into account the excep­tio­nal rea­sons justi­fy­ing the dero­ga­ti­on. The com­ple­ti­on of tho­se pro­ce­du­res shall be under­ta­ken wit­hout undue delay. 1a. In a duly justi­fi­ed situa­ti­on of urgen­cy for excep­tio­nal rea­sons of public secu­ri­ty or in case of spe­ci­fic, sub­stan­ti­al and immi­nent thre­at to the life or phy­si­cal safe­ty of natu­ral per­sons, law enforce­ment aut­ho­ri­ties or civil pro­tec­tion aut­ho­ri­ties may put a spe­ci­fic high-risk AI system into ser­vice wit­hout the aut­ho­ri­sa­ti­on refer­red to in para­graph 1 pro­vi­ded that such aut­ho­ri­sa­ti­on is reque­sted during or after the use wit­hout undue delay, and if such aut­ho­ri­sa­ti­on is rejec­ted, its use shall be stop­ped with imme­dia­te effect and all the results and out­puts of this use shall be imme­dia­te­ly dis­card­ed. 2. The aut­ho­ri­sa­ti­on refer­red to in para­graph 1 shall be issued only if the mar­ket sur­veil­lan­ce aut­ho­ri­ty con­clu­des that the high-risk AI system com­plies with the requi­re­ments of Chap­ter 2 of this Tit­le. The mar­ket sur­veil­lan­ce aut­ho­ri­ty shall inform the Com­mis­si­on and the other Mem­ber Sta­tes of any aut­ho­ri­sa­ti­on issued pur­su­ant to para­graph 1. This obli­ga­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data in rela­ti­on to the acti­vi­ties of law enforce­ment aut­ho­ri­ties. 3. Whe­re, within 15 calen­dar days of rece­ipt of the infor­ma­ti­on refer­red to in para­graph 2, no objec­tion has been rai­sed by eit­her a Mem­ber Sta­te or the Com­mis­si­on in respect of an aut­ho­ri­sa­ti­on issued by a mar­ket sur­veil­lan­ce aut­ho­ri­ty of a Mem­ber Sta­te in accordance with para­graph 1, that aut­ho­ri­sa­ti­on shall be dee­med justi­fi­ed. 4. Whe­re, within 15 calen­dar days of rece­ipt of the noti­fi­ca­ti­on refer­red to in para­graph 2, objec­tions are rai­sed by a Mem­ber Sta­te against an aut­ho­ri­sa­ti­on issued by a mar­ket sur­veil­lan­ce aut­ho­ri­ty of ano­ther Mem­ber Sta­te, or whe­re the Com­mis­si­on con­siders the aut­ho­ri­sa­ti­on to be con­tra­ry to Uni­on law or the con­clu­si­on of the Mem­ber Sta­tes regar­ding the com­pli­ance of the system as refer­red to in para­graph 2 to be unfoun­ded, the Com­mis­si­on shall wit­hout delay enter into con­sul­ta­ti­on with the rele­vant Mem­ber Sta­te; the operator(s) con­cer­ned shall be con­sul­ted and have the pos­si­bi­li­ty to pre­sent their views. In view the­reof, the Com­mis­si­on shall deci­de whe­ther the aut­ho­ri­sa­ti­on is justi­fi­ed or not. The Com­mis­si­on shall address its decis­i­on to the Mem­ber Sta­te con­cer­ned and the rele­vant ope­ra­tor or ope­ra­tors. 5. If the aut­ho­ri­sa­ti­on is con­side­red unju­sti­fi­ed, this shall be with­drawn by the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te con­cer­ned. 6. 
For high-risk AI systems rela­ted to pro­ducts cover­ed by Uni­on har­mo­ni­sa­ti­on legis­la­ti­on refer­red to in Annex II Sec­tion A, only the con­for­mi­ty assess­ment dero­ga­ti­on pro­ce­du­res estab­lished in that legis­la­ti­on shall apply. 
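Illustrative note (not part of the Regulation): paragraphs 3 and 4 of Article 47 above give the other Member States and the Commission 15 calendar days from receipt of the information to object to a national authorisation; if no objection arrives in that window, the authorisation is deemed justified. The check below expresses that deadline on the reading that an objection on the fifteenth day still counts; the function and parameter names are assumptions made for this example.

```python
from datetime import date, timedelta


def objection_deadline(information_received: date) -> date:
    """Last calendar day on which an objection under Article 47(4) can still be raised."""
    return information_received + timedelta(days=15)


def deemed_justified(information_received: date, objection_dates: list[date], today: date) -> bool:
    """True once the 15-day window has closed without any objection from a
    Member State or the Commission (Article 47(3))."""
    deadline = objection_deadline(information_received)
    objected_in_time = any(d <= deadline for d in objection_dates)
    return today > deadline and not objected_in_time


if __name__ == "__main__":
    received = date(2026, 4, 1)
    print(deemed_justified(received, [], today=date(2026, 4, 20)))                   # True
    print(deemed_justified(received, [date(2026, 4, 10)], today=date(2026, 4, 20)))  # False
```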

Artic­le 48 – EU decla­ra­ti­on of conformity

1. The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. 2. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall contain the information set out in Annex V and shall be translated into a language that can be easily understood by the national competent authorities of the Member State(s) in which the high-risk AI system is placed on the market or made available. 3. Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union legislations applicable to the high-risk AI system. The declaration shall contain all the information required for identification of the Union harmonisation legislation to which the declaration relates. 4. By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Chapter 2 of this Title. The provider shall keep the EU declaration of conformity up-to-date as appropriate. 5. The Commission shall be empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating the content of the EU declaration of conformity set out in Annex V in order to introduce elements that become necessary in light of technical progress. 

Artic­le 49 – CE mar­king of conformity

1. The CE mar­king of con­for­mi­ty shall be sub­ject to the gene­ral prin­ci­ples set out in Artic­le 30 of Regu­la­ti­on (EC) No 765/2008. 1a. For high-risk AI systems pro­vi­ded digi­tal­ly, a digi­tal CE mar­king shall be used, only if it can be easi­ly acce­s­sed via the inter­face from which the AI system is acce­s­sed or via an easi­ly acce­s­si­ble machi­ne-rea­da­ble code or other elec­tro­nic means. 2. The CE mar­king shall be affi­xed visi­bly, legi­bly and inde­li­bly for high-risk AI systems. Whe­re that is not pos­si­ble or not war­ran­ted on account of the natu­re of the high-risk AI system, it shall be affi­xed to the pack­a­ging or to the accom­pany­ing docu­men­ta­ti­on, as appro­pria­te. 3. Whe­re appli­ca­ble, the CE mar­king shall be fol­lo­wed by the iden­ti­fi­ca­ti­on num­ber of the noti­fi­ed body respon­si­ble for the con­for­mi­ty assess­ment pro­ce­du­res set out in Artic­le 43. The iden­ti­fi­ca­ti­on num­ber of the noti­fi­ed body shall be affi­xed by the body its­elf or, under its ins­truc­tions, by the pro­vi­der or by its aut­ho­ri­sed repre­sen­ta­ti­ve. The iden­ti­fi­ca­ti­on num­ber shall also be indi­ca­ted in any pro­mo­tio­nal mate­ri­al which men­ti­ons that the high- risk AI system ful­fils the requi­re­ments for CE mar­king. 3a. Whe­re high-risk AI systems are sub­ject to other Uni­on law which also pro­vi­des for the affixing of the CE mar­king, the CE mar­king shall indi­ca­te that the high-risk AI system also ful­fil the requi­re­ments of that other law. 

Artic­le 51 – Registration

1. Befo­re pla­cing on the mar­ket or put­ting into ser­vice a high-risk AI system listed in Annex III, with the excep­ti­on of high risk AI systems refer­red to in Annex III point 2, the pro­vi­der or, whe­re appli­ca­ble, the aut­ho­ri­sed repre­sen­ta­ti­ve shall regi­ster them­sel­ves and their system in the EU data­ba­se refer­red to in Artic­le 60. 1a. Befo­re pla­cing on the mar­ket or put­ting into ser­vice an AI system for which the pro­vi­der has con­clu­ded that it is not high-risk in appli­ca­ti­on of the pro­ce­du­re under Artic­le 6(2a), the pro­vi­der or, whe­re appli­ca­ble, the aut­ho­ri­sed repre­sen­ta­ti­ve shall regi­ster them­sel­ves and that system in the EU data­ba­se refer­red to in Artic­le 60. 1b. Befo­re put­ting into ser­vice or using a high-risk AI system listed in Annex III, with the excep­ti­on of high-risk AI systems listed in Annex III, point 2, deployers who are public aut­ho­ri­ties, agen­ci­es or bodies or per­sons acting on their behalf shall regi­ster them­sel­ves, sel­ect the system and regi­ster its use in the EU data­ba­se refer­red to in Artic­le 60. 1c. For high-risk AI systems refer­red to Annex III, points 1, 6 and 7 in the are­as of law enforce­ment, migra­ti­on, asyl­um and bor­der con­trol manage­ment, the regi­stra­ti­on refer­red to in para­graphs 1 to 1b shall be done in a secu­re non-public sec­tion of the EU data­ba­se refer­red to in Artic­le 60 and include only the fol­lo­wing infor­ma­ti­on, as appli­ca­ble: – points 1 to 9 of Annex VIII, sec­tion A with the excep­ti­on of points 5a, 7 and 8; – points 1 to 3 of Annex VIII, sec­tion B; – points 1 to 9 of Annex VIII, sec­tion X with the excep­ti­on of points 6 and 7; – points 1 to 5 of Annex VII­Ia with the excep­ti­on of point 4. Only the Com­mis­si­on and natio­nal aut­ho­ri­ties refer­red to in Art. 63(5) shall have access to the­se rest­ric­ted sec­tions of the EU data­ba­se. 1d. High risk AI systems refer­red to in Annex III, point 2 shall be regi­stered at natio­nal level. 

TITLE IV TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS

Artic­le 52 – Trans­pa­ren­cy obli­ga­ti­ons for pro­vi­ders and users of cer­tain AI systems and GPAI models

1. Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties unless those systems are available for the public to report a criminal offence. 1a. Providers of AI systems, including GPAI systems, generating synthetic audio, image, video or text content, shall ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account specificities and limitations of different types of content, costs of implementation and the generally acknowledged state-of-the-art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate and prosecute criminal offences. 2. Deployers of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto and process the personal data in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorization and emotion recognition, which are permitted by law to detect, prevent and investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in compliance with Union law. 3. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. 
This obli­ga­ti­on shall not app­ly whe­re the use is aut­ho­ri­sed by law to detect, pre­vent, inve­sti­ga­te and pro­se­cu­te cri­mi­nal offen­ces or whe­re the AI-gene­ra­ted con­tent has under­go­ne a pro­cess of human review or edi­to­ri­al con­trol and whe­re a natu­ral or legal per­son holds edi­to­ri­al respon­si­bi­li­ty for the publi­ca­ti­on of the con­tent. 3a. The infor­ma­ti­on refer­red to in para­graphs 1 to 3 shall be pro­vi­ded to the con­cer­ned natu­ral per­sons in a clear and distin­gu­is­ha­ble man­ner at the latest at the time of the first inter­ac­tion or expo­sure. The infor­ma­ti­on shall respect the appli­ca­ble acce­s­si­bi­li­ty requi­re­ments. 4. Para­graphs 1, 2 and 3 shall not affect the requi­re­ments and obli­ga­ti­ons set out in Tit­le III of this Regu­la­ti­on and shall be wit­hout pre­ju­di­ce to other trans­pa­ren­cy obli­ga­ti­ons for users of AI systems laid down in Uni­on or natio­nal law. 4a. The AI Office shall encou­ra­ge and faci­li­ta­te the dra­wing up of codes of prac­ti­ce at Uni­on level to faci­li­ta­te the effec­ti­ve imple­men­ta­ti­on of the obli­ga­ti­ons regar­ding the detec­tion and label­ling of arti­fi­ci­al­ly gene­ra­ted or mani­pu­la­ted con­tent. The Com­mis­si­on is empowered to adopt imple­men­ting acts to appro­ve the­se codes of prac­ti­ce in accordance with the pro­ce­du­re laid down in Artic­le 52e para­graphs 6 – 8. If it deems the code is not ade­qua­te, the Com­mis­si­on is empowered to adopt an imple­men­ting act spe­ci­fy­ing the com­mon rules for the imple­men­ta­ti­on of tho­se obli­ga­ti­ons in accordance with the exami­na­ti­on pro­ce­du­re laid down in Artic­le 73 para­graph 2. 
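Paragraph 1a of Article 52 requires providers to mark synthetic outputs in a machine-readable format, without mandating any particular technique. As one hedged illustration (not the method required or endorsed by the Regulation), the sketch below writes a disclosure tag into PNG metadata using the third-party Pillow library; robust approaches would typically combine several techniques, such as watermarking and provenance metadata.

```python
# Illustrative only: embed a machine-readable "AI-generated" disclosure into PNG metadata.
# This is one possible technique, not the marking method prescribed by Article 52(1a).
# Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(image: Image.Image, path: str, generator_name: str) -> None:
    """Save a generated image with a machine-readable disclosure text chunk."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator_name)
    image.save(path, pnginfo=metadata)

def read_disclosure(path: str) -> dict:
    """Read back the disclosure chunk so downstream tools can detect the marking."""
    with Image.open(path) as img:
        return {k: v for k, v in img.text.items() if k in ("ai_generated", "generator")}

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color="white")  # stand-in for generated content
    save_with_disclosure(img, "output.png", "example-image-model")
    print(read_disclosure("output.png"))
```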

TITLE VIIIA GENERAL PURPOSE AI MODELS

Chap­ter 1 CLASSIFICATION RULES

Article 52a – Classification of general purpose AI models as general purpose AI models with systemic risk

1. A general purpose AI model shall be classified as general-purpose AI model with systemic risk if it meets any of the following criteria: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; (b) based on a decision of the Commission, ex officio or following a qualified alert by the scientific panel that a general purpose AI model has capabilities or impact equivalent to those of point (a). 2. A general purpose AI model shall be presumed to have high impact capabilities pursuant to point (a) of paragraph 1 when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10²⁵. 3. The Commission shall adopt delegated acts in accordance with Article 73(2) to amend the thresholds listed in the paragraphs above, as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art. 
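For orientation only: the 10²⁵ FLOP presumption in paragraph 2 can be roughly related to model size and training data via the commonly used heuristic of about 6 FLOPs per parameter per training token. The sketch below is not derived from the Regulation; the heuristic constant and the example figures are assumptions, and the actual cumulative compute is whatever the provider measures for its own training run.

```python
# Illustrative only: rough check against the 10**25 FLOP presumption in Article 52a(2).
# The 6 * params * tokens rule of thumb is a common estimate for dense transformer
# training compute, not a methodology defined by the Regulation.

THRESHOLD_FLOPS = 1e25  # presumption threshold from Article 52a(2)

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Heuristic estimate of cumulative training compute in FLOPs."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_high_impact(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate exceeds the 10**25 FLOP presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical example: a 70e9-parameter model trained on 15e12 tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated compute: {flops:.2e} FLOPs")
    print("Presumed high impact capabilities:", presumed_high_impact(70e9, 15e12))
```

On these assumed figures the estimate stays just below the threshold; whether a notification duty under Article 52b arises depends on the provider's own measurement of cumulative compute.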

Artic­le 52b – Procedure

1. Whe­re a gene­ral pur­po­se AI model meets the requi­re­ments refer­red to in points (a) of Artic­le 52a(1), the rele­vant pro­vi­der shall noti­fy the Com­mis­si­on wit­hout delay and in any event within 2 weeks after tho­se requi­re­ments are met or it beco­mes known that the­se requi­re­ments will be met. That noti­fi­ca­ti­on shall include the infor­ma­ti­on neces­sa­ry to demon­stra­te that the rele­vant requi­re­ments have been met. If the Com­mis­si­on beco­mes awa­re of a gene­ral pur­po­se AI model pre­sen­ting syste­mic risks of which it has not been noti­fi­ed, it may deci­de to desi­gna­te it as a model with syste­mic risk. 2. The pro­vi­der of a gene­ral pur­po­se AI model that meets the requi­re­ments refer­red to in points (a) of Artic­le 52a(1) may pre­sent, with its noti­fi­ca­ti­on, suf­fi­ci­ent­ly sub­stan­tia­ted argu­ments to demon­stra­te that, excep­tio­nal­ly, alt­hough it meets the said requi­re­ments, the gene­ral-pur­po­se AI model does not pre­sent, due to its spe­ci­fic cha­rac­te­ri­stics, syste­mic risks and the­r­e­fo­re should not be clas­si­fi­ed as gene­ral-pur­po­se AI model with syste­mic risk. 3. Whe­re the Com­mis­si­on con­clu­des that the argu­ments sub­mit­ted pur­su­ant to para­graph 2 are not suf­fi­ci­ent­ly sub­stan­tia­ted and the rele­vant pro­vi­der was not able to demon­stra­te that the gene­ral pur­po­se AI model does not pre­sent, due to its spe­ci­fic cha­rac­te­ri­stics, syste­mic risks, it shall reject tho­se argu­ments and the gene­ral pur­po­se AI model shall be con­side­red as gene­ral pur­po­se AI model with syste­mic risk. 4. The Com­mis­si­on may desi­gna­te a gene­ral pur­po­se AI model as pre­sen­ting syste­mic risks, ex offi­cio or fol­lo­wing a qua­li­fi­ed alert of the sci­en­ti­fic panel pur­su­ant to point (a) of Artic­le 68h [Alerts of syste­mic risks by the sci­en­ti­fic panel] (1) on the basis of cri­te­ria set out in Annex IXc. The Com­mis­si­on shall be empowered to spe­ci­fy and update the cri­te­ria in Annex IXc by means of dele­ga­ted acts in accordance with Artic­le 74(2). 4a. Upon a rea­so­ned request of a pro­vi­der who­se model has been desi­gna­ted as a gene­ral pur­po­se AI model with syste­mic risk pur­su­ant to para­graph 4, the Com­mis­si­on shall take the request into account and may deci­de to reas­sess whe­ther the gene­ral pur­po­se AI model can still be con­side­red to pre­sent syste­mic risks on the basis of the cri­te­ria set out in Annex IXc. Such request shall con­tain objec­ti­ve, con­cre­te and new rea­sons that have ari­sen sin­ce the desi­gna­ti­on decis­i­on. Pro­vi­ders may request reas­sess­ment at the ear­liest six months after the desi­gna­ti­on decis­i­on. Whe­re the Com­mis­si­on, fol­lo­wing its reas­sess­ment, deci­des to main­tain the desi­gna­ti­on as gene­ral-pur­po­se AI model with syste­mic risk, pro­vi­ders may request reas­sess­ment at the ear­liest six months after this decis­i­on. 5. The Com­mis­si­on shall ensu­re that a list of gene­ral pur­po­se AI models with syste­mic risk is published and shall keep that list up to date, wit­hout pre­ju­di­ce to the need to respect and pro­tect intellec­tu­al pro­per­ty rights and con­fi­den­ti­al busi­ness infor­ma­ti­on or trade secrets in accordance with Uni­on and natio­nal law. 

Chap­ter 2 OBLIGATIONS FOR PROVIDERS OF GENERAL PURPOSE AI MODELS

Artic­le 52c – Obli­ga­ti­ons for pro­vi­ders of gene­ral pur­po­se AI models

1. Providers of general purpose AI models shall: (a) draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the elements set out in Annex IXa for the purpose of providing it, upon request, to the AI Office and the national competent authorities; (b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general purpose AI model in their AI system. Without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general purpose AI model and to comply with their obligations pursuant to this Regulation; and (ii) contain, at a minimum, the elements set out in Annex IXb. (c) put in place a policy to respect Union copyright law, in particular to identify and respect, including through state of the art technologies, the reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. -2. The obligations set out in paragraph 1, with the exception of letters (c) and (d), shall not apply to providers of AI models that are made accessible to the public under a free and open licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general purpose AI models with systemic risks. 2. Providers of general purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation. 3. Providers of general purpose AI models may rely on codes of practice within the meaning of Article 52e to demonstrate compliance with the obligations in paragraph 1, until a harmonised standard is published. Compliance with a European harmonised standard grants providers the presumption of conformity. Providers of general purpose AI models with systemic risks who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval of the Commission. 4. For the purpose of facilitating compliance with Annex IXa, notably point 2(d) and (e), the Commission shall be empowered to adopt delegated acts in accordance with Article 73 to detail measurement and calculation methodologies with a view to allow comparable and verifiable documentation. 4a. The Commission is empowered to adopt delegated acts in accordance with Article 73(2) to amend Annexes IXa and IXb in the light of the evolving technological developments. 4b. 
Any infor­ma­ti­on and docu­men­ta­ti­on obtai­ned pur­su­ant to the pro­vi­si­ons of this Artic­le, inclu­ding trade secrets, shall be trea­ted in com­pli­ance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 70. 
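Point (c) of Article 52c(1) requires providers to identify and respect machine-readable reservations of rights under Article 4(3) of Directive (EU) 2019/790, without prescribing how. The sketch below is only one hedged illustration, assuming a crawler that honours robots.txt and the "tdm-reservation" signal from the draft TDM Reservation Protocol; neither signal is mandated by the Regulation, and the crawler name is invented.

```python
# Illustrative sketch only: one possible way to check opt-out signals before adding a
# web document to a training corpus. Neither robots.txt nor the TDMRep "tdm-reservation"
# header is required by the AI Act; they are assumptions for this example.
import urllib.robotparser
import urllib.request
from urllib.parse import urlparse

USER_AGENT = "example-training-crawler"  # hypothetical crawler name

def robots_allows(url: str) -> bool:
    """Check the site's robots.txt for the crawler's user agent."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return False  # be conservative if robots.txt cannot be fetched
    return rp.can_fetch(USER_AGENT, url)

def tdm_reservation_declared(url: str) -> bool:
    """Return True if the response carries a 'tdm-reservation: 1' header (TDMRep draft)."""
    req = urllib.request.Request(url, method="HEAD", headers={"User-Agent": USER_AGENT})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.headers.get("tdm-reservation", "0").strip() == "1"
    except OSError:
        return True  # treat unreachable content as reserved, i.e. do not use it

def may_use_for_training(url: str) -> bool:
    """Combine both opt-out checks before ingesting a document."""
    return robots_allows(url) and not tdm_reservation_declared(url)
```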

Artic­le 52ca – Aut­ho­ri­sed representative

1. Pri­or to pla­cing a gene­ral pur­po­se AI model on the Uni­on mar­ket pro­vi­ders estab­lished out­side the Uni­on shall, by writ­ten man­da­te, appoint an aut­ho­ri­sed repre­sen­ta­ti­ve which is estab­lished in the Uni­on and shall enable it to per­form its tasks under this Regu­la­ti­on. 2. The aut­ho­ri­sed repre­sen­ta­ti­ve shall per­form the tasks spe­ci­fi­ed in the man­da­te recei­ved from the pro­vi­der. It shall pro­vi­de a copy of the man­da­te to the AI Office upon request, in one of the offi­ci­al lan­guages of the insti­tu­ti­ons of the Uni­on. For the pur­po­se of this Regu­la­ti­on, the man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to car­ry out the fol­lo­wing tasks: (a) veri­fy that the tech­ni­cal docu­men­ta­ti­on spe­ci­fi­ed in Annex IXa has been drawn up and all obli­ga­ti­ons refer­red to in Artic­les 52c and, whe­re appli­ca­ble, Artic­le 52d have been ful­fil­led by the pro­vi­der; (b) keep a copy of the tech­ni­cal docu­men­ta­ti­on at the dis­po­sal of the AI Office and natio­nal com­pe­tent aut­ho­ri­ties, for a peri­od ending 10 years after the model has been pla­ced on the mar­ket and the cont­act details of the pro­vi­der by which the aut­ho­ri­sed repre­sen­ta­ti­ve has been appoin­ted; (c) pro­vi­de the AI Office, upon a rea­so­ned request, with all the infor­ma­ti­on and docu­men­ta­ti­on, inclu­ding that kept accor­ding to point (a), neces­sa­ry to demon­stra­te the com­pli­ance with the obli­ga­ti­ons in this Tit­le; (d) coope­ra­te with the AI Office and natio­nal com­pe­tent aut­ho­ri­ties, upon a rea­so­ned request, on any action the lat­ter takes in rela­ti­on to the gene­ral pur­po­se AI model with syste­mic risks, inclu­ding when the model is inte­gra­ted into AI systems pla­ced on the mar­ket or put into ser­vice in the Uni­on. 3. The man­da­te shall empower the aut­ho­ri­sed repre­sen­ta­ti­ve to be addres­sed, in addi­ti­on to or instead of the pro­vi­der, by the AI Office or the natio­nal com­pe­tent aut­ho­ri­ties, on all issues rela­ted to ensu­ring com­pli­ance with this Regu­la­ti­on. 4. The aut­ho­ri­sed repre­sen­ta­ti­ve shall ter­mi­na­te the man­da­te if it con­siders or has rea­son to con­sider that the pro­vi­der acts con­tra­ry to its obli­ga­ti­ons under this Regu­la­ti­on. In such a case, it shall also imme­dia­te­ly inform the AI Office about the ter­mi­na­ti­on of the man­da­te and the rea­sons the­reof. 5. The obli­ga­ti­on set out in this artic­le shall not app­ly to pro­vi­ders of gene­ral pur­po­se AI models that are made acce­s­si­ble to the public under a free and open source licence that allo­ws for the access, usa­ge, modi­fi­ca­ti­on, and dis­tri­bu­ti­on of the model, and who­se para­me­ters, inclu­ding the weights, the infor­ma­ti­on on the model archi­tec­tu­re, and the infor­ma­ti­on on model usa­ge, are made publicly available, unless the gene­ral pur­po­se AI models pre­sent syste­mic risks. 

Chap­ter 3 OBLIGATIONS FOR PROVIDERS OF GENERAL PURPOSE AI MODELS WITH SYSTEMIC RISK

Artic­le 52d – Obli­ga­ti­ons for pro­vi­ders of gene­ral pur­po­se AI models with syste­mic risk

1. In addition to the obligations listed in Article 52c, providers of general purpose AI models with systemic risk shall: (a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identify and mitigate systemic risk; (b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of general purpose AI models with systemic risk; (c) keep track of, document and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; (d) ensure an adequate level of cybersecurity protection for the general purpose AI model with systemic risk and the physical infrastructure of the model. 2. Providers of general purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 52e to demonstrate compliance with the obligations in paragraph 1, until a harmonised standard is published. Compliance with a European harmonised standard grants providers the presumption of conformity. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice shall demonstrate alternative adequate means of compliance for approval of the Commission. 3. Any information and documentation obtained pursuant to the provisions of this Article, including trade secrets, shall be treated in compliance with the confidentiality obligations set out in Article 70. 
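Point (a) of paragraph 1 requires documented adversarial testing against state-of-the-art protocols, without prescribing any particular tooling. The following is a minimal, hedged sketch of how a documented red-teaming pass over a prompt set might be recorded; the `query_model` placeholder, the prompt file layout and the refusal heuristic are assumptions for illustration, not anything defined by the Regulation or by existing standards.

```python
# Minimal illustrative harness for documenting an adversarial-testing run.
# `query_model` is a placeholder for the provider's own inference entry point.
import csv
import datetime
import json

def query_model(prompt: str) -> str:
    """Placeholder: call the general purpose AI model under test."""
    raise NotImplementedError("wire this to the model being evaluated")

def looks_like_refusal(answer: str) -> bool:
    """Crude heuristic marker; real evaluations would use reviewed criteria."""
    return any(marker in answer.lower() for marker in ("i can't", "i cannot", "i won't"))

def run_adversarial_suite(prompt_file: str, report_file: str) -> None:
    """Run every prompt, record the outcome, and write a timestamped report."""
    findings = []
    with open(prompt_file, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects columns: id, category, prompt
            answer = query_model(row["prompt"])
            findings.append({
                "id": row["id"],
                "category": row["category"],
                "refused": looks_like_refusal(answer),
                "answer_excerpt": answer[:200],
            })
    report = {
        "run_timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt_file": prompt_file,
        "findings": findings,
    }
    with open(report_file, "w", encoding="utf-8") as out:
        json.dump(report, out, indent=2)

# Example call (hypothetical file names):
# run_adversarial_suite("adversarial_prompts.csv", "adversarial_report.json")
```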

Artic­le 52e – Codes of practice

1. The AI Office shall encou­ra­ge and faci­li­ta­te the dra­wing up of codes of prac­ti­ce at Uni­on level as an ele­ment to con­tri­bu­te to the pro­per appli­ca­ti­on of this Regu­la­ti­on, taking into account inter­na­tio­nal approa­ches. 2. The AI Office and the AI Board shall aim to ensu­re that the codes of prac­ti­ce cover, but not neces­s­a­ri­ly be limi­t­ed to, the obli­ga­ti­ons pro­vi­ded for in Artic­les 52c and 52d, inclu­ding the fol­lo­wing issues: (a) means to ensu­re that the infor­ma­ti­on refer­red to in Artic­le 52c (a) and (b) is kept up to date in the light of mar­ket and tech­no­lo­gi­cal deve­lo­p­ments, and the ade­qua­te level of detail for the sum­ma­ry about the con­tent used for trai­ning; (b) the iden­ti­fi­ca­ti­on of the type and natu­re of the syste­mic risks at Uni­on level, inclu­ding their sources when appro­pria­te; (c) the mea­su­res, pro­ce­du­res and moda­li­ties for the assess­ment and manage­ment of the syste­mic risks at Uni­on level, inclu­ding the docu­men­ta­ti­on the­reof. The assess­ment and manage­ment of the syste­mic risks at Uni­on level shall be pro­por­tio­na­te to the risks, take into con­side­ra­ti­on their seve­ri­ty and pro­ba­bi­li­ty and take into account the spe­ci­fic chal­lenges of tack­ling tho­se risks in the light of the pos­si­ble ways in which such risks may emer­ge and mate­ria­li­ze along the AI value chain. 3. The AI Office may invi­te the pro­vi­ders of gene­ral pur­po­se AI models, as well as rele­vant natio­nal com­pe­tent aut­ho­ri­ties, to par­ti­ci­pa­te in the dra­wing up of codes of prac­ti­ce. Civil socie­ty orga­ni­sa­ti­ons, indu­stry, aca­de­mia and other rele­vant stake­hol­ders, such as down­stream pro­vi­ders and inde­pen­dent experts, may sup­port the pro­cess. 4. The AI Office and the Board shall aim to ensu­re that the codes of prac­ti­ce cle­ar­ly set out their spe­ci­fic objec­ti­ves and con­tain com­mit­ments or mea­su­res, inclu­ding key per­for­mance indi­ca­tors as appro­pria­te, to ensu­re the achie­ve­ment of tho­se objec­ti­ves and take due account of the needs and inte­rests of all inte­re­sted par­ties, inclu­ding affec­ted per­sons, at Uni­on level. 5. The AI Office may invi­te all pro­vi­ders of gene­ral pur­po­se AI models to par­ti­ci­pa­te in the codes of prac­ti­ce. For pro­vi­ders of gene­ral pur­po­se AI models not pre­sen­ting syste­mic risks this par­ti­ci­pa­ti­on should be limi­t­ed to obli­ga­ti­ons fore­seen in para­graph 2 point (a) of this Artic­le, unless they decla­re expli­ci­t­ly their inte­rest to join the full code. 6. The AI Office shall aim to ensu­re that par­ti­ci­pan­ts to the codes of prac­ti­ce report regu­lar­ly to the AI Office on the imple­men­ta­ti­on of the com­mit­ments and the mea­su­res taken and their out­co­mes, inclu­ding as mea­su­red against the key per­for­mance indi­ca­tors as appro­pria­te. Key per­for­mance indi­ca­tors and report­ing com­mit­ments shall take into account dif­fe­ren­ces in size and capa­ci­ty bet­ween dif­fe­rent par­ti­ci­pan­ts. 7. The AI Office and the AI Board shall regu­lar­ly moni­tor and eva­lua­te the achie­ve­ment of the objec­ti­ves of the codes of prac­ti­ce by the par­ti­ci­pan­ts and their con­tri­bu­ti­on to the pro­per appli­ca­ti­on of this Regu­la­ti­on. 
The AI Office and the Board shall assess whether the codes of practice cover the obligations provided for in Articles 52c and 52d, including the issues listed in paragraph 2 of this Article, and shall regularly monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes of practice. The Commission may, by way of implementing acts, decide to approve the code of practice and give it a general validity within the Union. Those implementing acts shall be adopted in accordance with the examination procedure set out in Article 74(2). 8. As appropriate, the AI Office shall also encourage and facilitate review and adaptation of the codes of practice, in particular in light of emerging standards. The AI Office shall assist in the assessment of available standards. 9. If, by the time the Regulation becomes applicable, a Code of Practice cannot be finalised, or if the AI Office deems it is not adequate following its assessment under paragraph 7, the Commission may provide, by means of implementing acts, common rules for the implementation of the obligations provided for in Articles 52c and 52d, including the issues set out in paragraph 2. 

TITLE V MEASURES IN SUPPORT OF INNOVATION

Artic­le 53 – AI regu­la­to­ry sandboxes

1. Mem­ber Sta­tes shall ensu­re that their com­pe­tent aut­ho­ri­ties estab­lish at least one AI regu­la­to­ry sand­box at natio­nal level, which shall be ope­ra­tio­nal 24 months after ent­ry into force. This sand­box may also be estab­lished joint­ly with one or seve­ral other Mem­ber Sta­tes’ com­pe­tent aut­ho­ri­ties. The Com­mis­si­on may pro­vi­de tech­ni­cal sup­port, advice and tools for the estab­lish­ment and ope­ra­ti­on of AI regu­la­to­ry sand­bo­xes. The obli­ga­ti­on estab­lished in pre­vious para­graph can also be ful­fil­led by par­ti­ci­pa­ti­on in an exi­sting sand­box inso­far as this par­ti­ci­pa­ti­on pro­vi­des equi­va­lent level of natio­nal covera­ge for the par­ti­ci­pa­ting Mem­ber Sta­tes. 1a. Addi­tio­nal AI regu­la­to­ry sand­bo­xes at regio­nal or local levels or joint­ly with other Mem­ber Sta­tes’ com­pe­tent aut­ho­ri­ties may also be estab­lished. 1b. The Euro­pean Data Pro­tec­tion Super­vi­sor may also estab­lish an AI regu­la­to­ry sand­box for the EU insti­tu­ti­ons, bodies and agen­ci­es and exer­cise the roles and the tasks of natio­nal com­pe­tent aut­ho­ri­ties in accordance with this chap­ter. 1c. Mem­ber Sta­tes shall ensu­re that com­pe­tent aut­ho­ri­ties refer­red to in para­graphs 1 and 1a allo­ca­te suf­fi­ci­ent resour­ces to com­ply with this Artic­le effec­tively and in a time­ly man­ner. Whe­re appro­pria­te, natio­nal com­pe­tent aut­ho­ri­ties shall coope­ra­te with other rele­vant aut­ho­ri­ties and may allow for the invol­vement of other actors within the AI eco­sy­stem. This Artic­le shall not affect other regu­la­to­ry sand­bo­xes estab­lished under natio­nal or Uni­on law. Mem­ber Sta­tes shall ensu­re an appro­pria­te level of coope­ra­ti­on bet­ween the aut­ho­ri­ties super­vi­sing tho­se other sand­bo­xes and the natio­nal com­pe­tent aut­ho­ri­ties. 1d. AI regu­la­to­ry sand­bo­xes estab­lished under Artic­le 53(1) of this Regu­la­ti­on shall, in accordance with Artic­les 53 and 53a, pro­vi­de for a con­trol­led envi­ron­ment that fosters inno­va­ti­on and faci­li­ta­tes the deve­lo­p­ment, trai­ning, test­ing and vali­da­ti­on of inno­va­ti­ve AI systems for a limi­t­ed time befo­re their pla­ce­ment on the mar­ket or put­ting into ser­vice pur­su­ant to a spe­ci­fic sand­box plan agreed bet­ween the pro­s­pec­ti­ve pro­vi­ders and the com­pe­tent aut­ho­ri­ty. Such regu­la­to­ry sand­bo­xes may include test­ing in real world con­di­ti­ons super­vi­sed in the sand­box. 1e. Com­pe­tent aut­ho­ri­ties shall pro­vi­de, as appro­pria­te, gui­dance, super­vi­si­on and sup­port within the sand­box with a view to iden­ti­fy­ing risks, in par­ti­cu­lar to fun­da­men­tal rights, health and safe­ty, test­ing, miti­ga­ti­on mea­su­res, and their effec­ti­ve­ness in rela­ti­on to the obli­ga­ti­ons and requi­re­ments of this Regu­la­ti­on and, whe­re rele­vant, other Uni­on and Mem­ber Sta­tes legis­la­ti­on super­vi­sed within the sand­box. 1f. Com­pe­tent aut­ho­ri­ties shall pro­vi­de pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders with gui­dance on regu­la­to­ry expec­ta­ti­ons and how to ful­fil the requi­re­ments and obli­ga­ti­ons set out in this Regu­la­ti­on. Upon request of the pro­vi­der or pro­s­pec­ti­ve pro­vi­der of the AI system, the com­pe­tent aut­ho­ri­ty shall pro­vi­de a writ­ten pro­of of the acti­vi­ties suc­cessful­ly car­ri­ed out in the sand­box. 
The com­pe­tent aut­ho­ri­ty shall also pro­vi­de an exit report detail­ing the acti­vi­ties car­ri­ed out in the sand­box and the rela­ted results and lear­ning out­co­mes. Pro­vi­ders may use such docu­men­ta­ti­on to demon­stra­te the com­pli­ance with this Regu­la­ti­on through the con­for­mi­ty assess­ment pro­cess or rele­vant mar­ket sur­veil­lan­ce acti­vi­ties. In this regard, the exit reports and the writ­ten pro­of pro­vi­ded by the natio­nal com­pe­tent aut­ho­ri­ty shall be taken posi­tively into account by mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fi­ed bodies, with a view to acce­le­ra­te con­for­mi­ty assess­ment pro­ce­du­res to a rea­sonable ext­ent. 1fa. Sub­ject to the con­fi­den­tia­li­ty pro­vi­si­ons in Artic­le 70 and with the agree­ment of the sand­box provider/prospective pro­vi­der, the Euro­pean Com­mis­si­on and the Board shall be aut­ho­ri­sed to access the exit reports and shall take them into account, as appro­pria­te, when exer­cis­ing their tasks under this Regu­la­ti­on. If both pro­vi­der and pro­s­pec­ti­ve pro­vi­der and the natio­nal com­pe­tent aut­ho­ri­ty expli­ci­t­ly agree to this, the exit report can be made publicly available through the sin­gle infor­ma­ti­on plat­form refer­red to in this artic­le. 1g. The estab­lish­ment of AI regu­la­to­ry sand­bo­xes shall aim to con­tri­bu­te to the fol­lo­wing objec­ti­ves: (a) impro­ve legal cer­tain­ty to achie­ve regu­la­to­ry com­pli­ance with this Regu­la­ti­on or, whe­re rele­vant, other appli­ca­ble Uni­on and Mem­ber Sta­tes legis­la­ti­on; (b) sup­port the sha­ring of best prac­ti­ces through coope­ra­ti­on with the aut­ho­ri­ties invol­ved in the AI regu­la­to­ry sand­box; (c) foster inno­va­ti­on and com­pe­ti­ti­ve­ness and faci­li­ta­te the deve­lo­p­ment of an AI eco­sy­stem; (d) con­tri­bu­te to evi­dence-based regu­la­to­ry lear­ning; (e) faci­li­ta­te and acce­le­ra­te access to the Uni­on mar­ket for AI systems, in par­ti­cu­lar when pro­vi­ded by small and medi­um-sized enter­pri­ses (SMEs), inclu­ding start- ups. 2. Natio­nal com­pe­tent aut­ho­ri­ties shall ensu­re that, to the ext­ent the inno­va­ti­ve AI systems invol­ve the pro­ce­s­sing of per­so­nal data or other­wi­se fall under the super­vi­so­ry remit of other natio­nal aut­ho­ri­ties or com­pe­tent aut­ho­ri­ties pro­vi­ding or sup­port­ing access to data, the natio­nal data pro­tec­tion aut­ho­ri­ties, and tho­se other natio­nal aut­ho­ri­ties are asso­cia­ted to the ope­ra­ti­on of the AI regu­la­to­ry sand­box and invol­ved in the super­vi­si­on of tho­se aspects to the ext­ent of their respec­ti­ve tasks and powers, as appli­ca­ble. 3. The AI regu­la­to­ry sand­bo­xes shall not affect the super­vi­so­ry and cor­rec­ti­ve powers of the com­pe­tent aut­ho­ri­ties super­vi­sing the sand­bo­xes, inclu­ding at regio­nal or local level. Any signi­fi­cant risks to health and safe­ty and fun­da­men­tal rights iden­ti­fi­ed during the deve­lo­p­ment and test­ing of such AI systems shall result in an ade­qua­te miti­ga­ti­on. Natio­nal com­pe­tent aut­ho­ri­ties shall have the power to tem­po­r­a­ri­ly or per­ma­nent­ly sus­pend the test­ing pro­cess, or par­ti­ci­pa­ti­on in the sand­box if no effec­ti­ve miti­ga­ti­on is pos­si­ble and inform the AI Office of such decis­i­on. 
Natio­nal com­pe­tent aut­ho­ri­ties shall exer­cise their super­vi­so­ry powers within the limits of the rele­vant legis­la­ti­on, using their dis­cretio­na­ry powers when imple­men­ting legal pro­vi­si­ons to a spe­ci­fic AI sand­box pro­ject, with the objec­ti­ve of sup­port­ing inno­va­ti­on in AI in the Uni­on. 4. Pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders in the AI regu­la­to­ry sand­box shall remain lia­ble under appli­ca­ble Uni­on and Mem­ber Sta­tes lia­bi­li­ty legis­la­ti­on for any dama­ge inflic­ted on third par­ties as a result of the expe­ri­men­ta­ti­on taking place in the sand­box. Howe­ver, pro­vi­ded that the pro­s­pec­ti­ve provider(s) respect the spe­ci­fic plan and the terms and con­di­ti­ons for their par­ti­ci­pa­ti­on and fol­low in good faith the gui­dance given by the natio­nal com­pe­tent aut­ho­ri­ty, no admi­ni­stra­ti­ve fines shall be impo­sed by the aut­ho­ri­ties for inf­rin­ge­ments of this Regu­la­ti­on. To the ext­ent that other com­pe­tent aut­ho­ri­ties respon­si­ble for other Uni­on and Mem­ber Sta­tes’ legis­la­ti­on have been actively invol­ved in the super­vi­si­on of the AI system in the sand­box and have pro­vi­ded gui­dance for com­pli­ance, no admi­ni­stra­ti­ve fines shall be impo­sed regar­ding that legis­la­ti­on. 4b. The AI regu­la­to­ry sand­bo­xes shall be desi­gned and imple­men­ted in such a way that, whe­re rele­vant, they faci­li­ta­te cross-bor­der coope­ra­ti­on bet­ween natio­nal com­pe­tent aut­ho­ri­ties. 5. Natio­nal com­pe­tent aut­ho­ri­ties shall coor­di­na­te their acti­vi­ties and coope­ra­te within the frame­work of the Board. 5a. Natio­nal com­pe­tent aut­ho­ri­ties shall inform the AI Office and the Board of the estab­lish­ment of a sand­box and may ask for sup­port and gui­dance. A list of plan­ned and exi­sting AI sand­bo­xes shall be made publicly available by the AI Office and kept up to date in order to encou­ra­ge more inter­ac­tion in the regu­la­to­ry sand­bo­xes and cross-bor­der coope­ra­ti­on. 5b. Natio­nal com­pe­tent aut­ho­ri­ties shall sub­mit to the AI Office and to the Board, annu­al reports, start­ing one year after the estab­lish­ment of the AI regu­la­to­ry sand­box and then every year until its ter­mi­na­ti­on and a final report. Tho­se reports shall pro­vi­de infor­ma­ti­on on the pro­gress and results of the imple­men­ta­ti­on of tho­se sand­bo­xes, inclu­ding best prac­ti­ces, inci­dents, les­sons lear­nt and recom­men­da­ti­ons on their set­up and, whe­re rele­vant, on the appli­ca­ti­on and pos­si­ble revi­si­on of this Regu­la­ti­on, inclu­ding its dele­ga­ted and imple­men­ting acts, and other Uni­on law super­vi­sed within the sand­box. Tho­se annu­al reports or abstracts the­reof shall be made available to the public, online. The Com­mis­si­on shall, whe­re appro­pria­te, take the annu­al reports into account when exer­cis­ing their tasks under this Regu­la­ti­on. 6. The Com­mis­si­on shall deve­lop a sin­gle and dedi­ca­ted inter­face con­tai­ning all rele­vant infor­ma­ti­on rela­ted to sand­bo­xes to allow stake­hol­ders to inter­act with regu­la­to­ry sand­bo­xes and to rai­se enqui­ries with com­pe­tent aut­ho­ri­ties, and to seek non-bin­ding gui­dance on the con­for­mi­ty of inno­va­ti­ve pro­ducts, ser­vices, busi­ness models embed­ding AI tech­no­lo­gies, in accordance with Artic­le 55(1)(c). The Com­mis­si­on shall proac­tively coor­di­na­te with natio­nal com­pe­tent aut­ho­ri­ties, whe­re relevant. 

Article 53a – Modalities and functioning of AI regulatory sandboxes

1. In order to avo­id frag­men­ta­ti­on across the Uni­on, the Com­mis­si­on shall adopt an imple­men­ting act detail­ing the moda­li­ties for the estab­lish­ment, deve­lo­p­ment, imple­men­ta­ti­on, ope­ra­ti­on and super­vi­si­on of the AI regu­la­to­ry sand­bo­xes. The imple­men­ting act shall include com­mon prin­ci­ples on the fol­lo­wing issues: (a) eli­gi­bi­li­ty and sel­ec­tion for par­ti­ci­pa­ti­on in the AI regu­la­to­ry sand­box; (b) pro­ce­du­re for the appli­ca­ti­on, par­ti­ci­pa­ti­on, moni­to­ring, exi­ting from and ter­mi­na­ti­on of the AI regu­la­to­ry sand­box, inclu­ding the sand­box plan and the exit report; (c) the terms and con­di­ti­ons appli­ca­ble to the par­ti­ci­pan­ts. The imple­men­ting acts shall ensu­re that: (a) regu­la­to­ry sand­bo­xes are open to any app­ly­ing pro­s­pec­ti­ve pro­vi­der of an AI system who ful­fils eli­gi­bi­li­ty and sel­ec­tion cri­te­ria. The cri­te­ria for acce­s­sing to the regu­la­to­ry sand­box are trans­pa­rent and fair and estab­li­shing aut­ho­ri­ties inform appli­cants of their decis­i­on within 3 months of the appli­ca­ti­on; (b) regu­la­to­ry sand­bo­xes allow broad and equal access and keep up with demand for par­ti­ci­pa­ti­on; pro­s­pec­ti­ve pro­vi­ders may also sub­mit appli­ca­ti­ons in part­ner­ships with users and other rele­vant third par­ties; (c) the moda­li­ties and con­di­ti­ons con­cer­ning regu­la­to­ry sand­bo­xes shall to the best ext­ent pos­si­ble sup­port fle­xi­bi­li­ty for natio­nal com­pe­tent aut­ho­ri­ties to estab­lish and ope­ra­te their AI regu­la­to­ry sand­bo­xes; (d) access to the AI regu­la­to­ry sand­bo­xes is free of char­ge for SMEs and start-ups wit­hout pre­ju­di­ce to excep­tio­nal costs that natio­nal com­pe­tent aut­ho­ri­ties may reco­ver in a fair and pro­por­tio­na­te man­ner; (e) they faci­li­ta­te pro­s­pec­ti­ve pro­vi­ders, by means of the lear­ning out­co­mes of the sand­bo­xes, to con­duct the con­for­mi­ty assess­ment obli­ga­ti­ons of this Regu­la­ti­on or the vol­un­t­a­ry appli­ca­ti­on of the codes of con­duct refer­red to in Artic­le 69; (f) regu­la­to­ry sand­bo­xes faci­li­ta­te the invol­vement of other rele­vant actors within the AI eco­sy­stem, such as noti­fi­ed bodies and stan­dar­di­sati­on orga­ni­sa­ti­ons (SMEs, start- ups, enter­pri­ses, inno­va­tors, test­ing and expe­ri­men­ta­ti­on faci­li­ties, rese­arch and expe­ri­men­ta­ti­on labs and digi­tal inno­va­ti­on hubs, cen­tres of excel­lence, indi­vi­du­al rese­ar­chers), in order to allow and faci­li­ta­te coope­ra­ti­on with the public and pri­va­te sec­tor; (g) pro­ce­du­res, pro­ce­s­ses and admi­ni­stra­ti­ve requi­re­ments for appli­ca­ti­on, sel­ec­tion, par­ti­ci­pa­ti­on and exi­ting the sand­box are simp­le, easi­ly intel­li­gi­ble, cle­ar­ly com­mu­ni­ca­ted in order to faci­li­ta­te the par­ti­ci­pa­ti­on of SMEs and start-ups with limi­t­ed legal and admi­ni­stra­ti­ve capa­ci­ties and are stream­lined across the Uni­on, in order to avo­id frag­men­ta­ti­on and that par­ti­ci­pa­ti­on in a regu­la­to­ry sand­box estab­lished by a Mem­ber Sta­te, or by the EDPS is mutual­ly and uni­form­ly reco­g­nis­ed and car­ri­es the same legal effects across the Uni­on; (h) par­ti­ci­pa­ti­on in the AI regu­la­to­ry sand­box is limi­t­ed to a peri­od that is appro­pria­te to the com­ple­xi­ty and sca­le of the pro­ject. 
This peri­od may be exten­ded by the natio­nal com­pe­tent aut­ho­ri­ty; (i) the sand­bo­xes shall faci­li­ta­te the deve­lo­p­ment of tools and infras­truc­tu­re for test­ing, bench­mar­king, asses­sing and explai­ning dimen­si­ons of AI systems rele­vant for regu­la­to­ry lear­ning, such as accu­ra­cy, robust­ness and cyber­se­cu­ri­ty as well as mea­su­res to miti­ga­te risks to fun­da­men­tal rights and the socie­ty at lar­ge. 3. Pro­s­pec­ti­ve pro­vi­ders in the sand­bo­xes, in par­ti­cu­lar SMEs and start-ups, shall be direc­ted, whe­re rele­vant, to pre-deployment ser­vices such as gui­dance on the imple­men­ta­ti­on of this Regu­la­ti­on, to other value-adding ser­vices such as help with stan­dar­di­sati­on docu­ments and cer­ti­fi­ca­ti­on, Test­ing & Expe­ri­men­ta­ti­on Faci­li­ties, Digi­tal Hubs and Cen­tres of Excel­lence. 4. When natio­nal com­pe­tent aut­ho­ri­ties con­sider aut­ho­ri­sing test­ing in real world con­di­ti­ons super­vi­sed within the frame­work of an AI regu­la­to­ry sand­box estab­lished under this Artic­le, they shall spe­ci­fi­cal­ly agree with the par­ti­ci­pan­ts on the terms and con­di­ti­ons of such test­ing and in par­ti­cu­lar on the appro­pria­te safe­guards with the view to pro­tect fun­da­men­tal rights, health and safe­ty. Whe­re appro­pria­te, they shall coope­ra­te with other natio­nal com­pe­tent aut­ho­ri­ties with a view to ensu­re con­si­stent prac­ti­ces across the Union. 

Artic­le 54 – Fur­ther pro­ce­s­sing of per­so­nal data for deve­lo­ping cer­tain AI systems in the public inte­rest in the AI regu­la­to­ry sandbox

1. In the AI regulatory sandbox personal data lawfully collected for other purposes may be processed solely for the purposes of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: (a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person governed by public law or by private law and in one or more of the following areas: (ii) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems; (iii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, pollution as well as green transition, climate change mitigation and adaptation; (iiia) energy sustainability; (iiib) safety and resilience of transport systems and mobility, critical infrastructure and networks; (iiic) efficiency and quality of public administration and public services; (b) the data processed are necessary for complying with one or more of the requirements referred to in Title III, Chapter 2 where those requirements cannot be effectively fulfilled by processing anonymised, synthetic or other non-personal data; (c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation as well as a response mechanism to promptly mitigate those risks and, where necessary, stop the processing; (d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data; (e) providers can only further share the originally collected data in compliance with EU data protection law. Any personal data created in the sandbox cannot be shared outside the sandbox; (f) any processing of personal data in the context of the sandbox does not lead to measures or decisions affecting the data subjects nor affect the application of their rights laid down in Union law on the protection of personal data; (g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period; (h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law; (i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation in Annex IV; (j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities. 
This obli­ga­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data in rela­ti­on to the acti­vi­ties of law enforce­ment, bor­der con­trol, immi­gra­ti­on or asyl­um aut­ho­ri­ties. 1a. For the pur­po­se of pre­ven­ti­on, inve­sti­ga­ti­on, detec­tion or pro­se­cu­ti­on of cri­mi­nal offen­ces or the exe­cu­ti­on of cri­mi­nal pen­al­ties, inclu­ding the safe­guar­ding against and the pre­ven­ti­on of thre­ats to public secu­ri­ty, under the con­trol and respon­si­bi­li­ty of law enforce­ment aut­ho­ri­ties, the pro­ce­s­sing of per­so­nal data in AI regu­la­to­ry sand­bo­xes shall be based on a spe­ci­fic Mem­ber Sta­te or Uni­on law and sub­ject to the same cumu­la­ti­ve con­di­ti­ons as refer­red to in para­graph 1. 2. Para­graph 1 is wit­hout pre­ju­di­ce to Uni­on or Mem­ber Sta­tes legis­la­ti­on exclu­ding pro­ce­s­sing for other pur­po­ses than tho­se expli­ci­t­ly men­tio­ned in that legis­la­ti­on, as well as to Uni­on or Mem­ber Sta­tes laws lay­ing down the basis for the pro­ce­s­sing of per­so­nal data which is neces­sa­ry for the pur­po­se of deve­lo­ping, test­ing and trai­ning of inno­va­ti­ve AI systems or any other legal basis, in com­pli­ance with Uni­on law on the pro­tec­tion of per­so­nal data. 
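Points (g) and (h) of Article 54(1) combine a deletion duty (once participation ends or the retention period expires) with a duty to keep processing logs for the duration of participation. Below is a minimal, hedged sketch of a retention-aware register; the class and field names are invented for illustration and nothing here is prescribed by the Regulation.

```python
# Illustrative sketch: a retention-aware register for personal data processed in a sandbox.
# Names and structure are hypothetical; Article 54(1)(g) and (h) do not prescribe a format.
import datetime
from dataclasses import dataclass, field

@dataclass
class SandboxDataRecord:
    record_id: str
    purpose: str                      # e.g. "training", "testing", "validation"
    retention_until: datetime.date    # end of retention period agreed in the sandbox plan
    deleted: bool = False

@dataclass
class SandboxRegister:
    processing_log: list = field(default_factory=list)  # kept for the duration of participation
    records: dict = field(default_factory=dict)

    def log_processing(self, record_id: str, operation: str) -> None:
        """Append a timestamped entry to the processing log."""
        self.processing_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "record_id": record_id,
            "operation": operation,
        })

    def add(self, record: SandboxDataRecord) -> None:
        self.records[record.record_id] = record
        self.log_processing(record.record_id, "collected")

    def enforce_deletion(self, participation_terminated: bool) -> None:
        """Delete data whose retention has lapsed or once participation has terminated."""
        today = datetime.date.today()
        for record in self.records.values():
            if not record.deleted and (participation_terminated or record.retention_until < today):
                record.deleted = True
                self.log_processing(record.record_id, "deleted")
```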

Artic­le 54a – Test­ing of high-risk AI systems in real world con­di­ti­ons out­side AI regu­la­to­ry sandboxes

1. Test­ing of AI systems in real world con­di­ti­ons out­side AI regu­la­to­ry sand­bo­xes may be con­duc­ted by pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders of high-risk AI systems listed in Annex III, in accordance with the pro­vi­si­ons of this Artic­le and the real-world test­ing plan refer­red to in this Artic­le, wit­hout pre­ju­di­ce to the pro­hi­bi­ti­ons under Artic­le 5. The detail­ed ele­ments of the real world test­ing plan shall be spe­ci­fi­ed in imple­men­ting acts adopted by the Com­mis­si­on in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 74(2). This pro­vi­si­on shall be wit­hout pre­ju­di­ce to Uni­on or natio­nal law for the test­ing in real world con­di­ti­ons of high-risk AI systems rela­ted to pro­ducts cover­ed by legis­la­ti­on listed in Annex II. 2. Pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders may con­duct test­ing of high-risk AI systems refer­red to in Annex III in real world con­di­ti­ons at any time befo­re the pla­cing on the mar­ket or put­ting into ser­vice of the AI system on their own or in part­ner­ship with one or more pro­s­pec­ti­ve deployers. 3. The test­ing of high-risk AI systems in real world con­di­ti­ons under this Artic­le shall be wit­hout pre­ju­di­ce to ethi­cal review that may be requi­red by natio­nal or Uni­on law. 4. Pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders may con­duct the test­ing in real world con­di­ti­ons only whe­re all of the fol­lo­wing con­di­ti­ons are met: (a) the pro­vi­der or pro­s­pec­ti­ve pro­vi­der has drawn up a real world test­ing plan and sub­mit­ted it to the mar­ket sur­veil­lan­ce aut­ho­ri­ty in the Mem­ber State(s) whe­re the test­ing in real world con­di­ti­ons is to be con­duc­ted; (b) the mar­ket sur­veil­lan­ce aut­ho­ri­ty in the Mem­ber State(s) whe­re the test­ing in real world con­di­ti­ons is to be con­duc­ted has appro­ved the test­ing in real world con­di­ti­ons and the real world test­ing plan. Whe­re the mar­ket sur­veil­lan­ce aut­ho­ri­ty in that Mem­ber Sta­te has not pro­vi­ded with an ans­wer in 30 days, the test­ing in real world con­di­ti­ons and the real world test­ing plan shall be under­s­tood as appro­ved. 
In cases whe­re natio­nal law does not fore­see a tacit appr­oval, the test­ing in real world con­di­ti­ons shall be sub­ject to an aut­ho­ri­sa­ti­on; (c) the pro­vi­der or pro­s­pec­ti­ve pro­vi­der with the excep­ti­on of high-risk AI systems refer­red to in Annex III, points 1, 6 and 7 in the are­as of law enforce­ment, migra­ti­on, asyl­um and bor­der con­trol manage­ment, and high risk AI systems refer­red to in Annex III point 2, has regi­stered the test­ing in real world con­di­ti­ons in the non-public part of the EU data­ba­se refer­red to in Artic­le 60(3) with a Uni­on-wide uni­que sin­gle iden­ti­fi­ca­ti­on num­ber and the infor­ma­ti­on spe­ci­fi­ed in Annex VII­Ia; (d) the pro­vi­der or pro­s­pec­ti­ve pro­vi­der con­duc­ting the test­ing in real world con­di­ti­ons is estab­lished in the Uni­on or it has appoin­ted a legal repre­sen­ta­ti­ve who is estab­lished in the Uni­on; (e) data coll­ec­ted and pro­ce­s­sed for the pur­po­se of the test­ing in real world con­di­ti­ons shall only be trans­fer­red to third count­ries out­side the Uni­on pro­vi­ded appro­pria­te and appli­ca­ble safe­guards under Uni­on law are imple­men­ted; (f) the test­ing in real world con­di­ti­ons does not last lon­ger than neces­sa­ry to achie­ve its objec­ti­ves and in any case not lon­ger than 6 months, which may be exten­ded for an addi­tio­nal amount of 6 months, sub­ject to pri­or noti­fi­ca­ti­on by the pro­vi­der to the mar­ket sur­veil­lan­ce aut­ho­ri­ty, accom­pa­nied by an expl­ana­ti­on on the need for such time exten­si­on; (g) per­sons belon­ging to vul­nerable groups due to their age, phy­si­cal or men­tal disa­bi­li­ty are appro­pria­te­ly pro­tec­ted; (h) whe­re a pro­vi­der or pro­s­pec­ti­ve pro­vi­der orga­ni­s­es the test­ing in real world con­di­ti­ons in coope­ra­ti­on with one or more pro­s­pec­ti­ve deployers, the lat­ter have been infor­med of all aspects of the test­ing that are rele­vant to their decis­i­on to par­ti­ci­pa­te, and given the rele­vant ins­truc­tions on how to use the AI system refer­red to in Artic­le 13; the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and the deployer(s) shall con­clude an agree­ment spe­ci­fy­ing their roles and respon­si­bi­li­ties with a view to ensu­ring com­pli­ance with the pro­vi­si­ons for test­ing in real world con­di­ti­ons under this Regu­la­ti­on and other appli­ca­ble Uni­on and Mem­ber Sta­tes legis­la­ti­on; (i) the sub­jects of the test­ing in real world con­di­ti­ons have given infor­med con­sent in accordance with Artic­le 54b, or in the case of law enforce­ment, whe­re the see­king of infor­med con­sent would pre­vent the AI system from being tested, the test­ing its­elf and the out­co­me of the test­ing in the real world con­di­ti­ons shall not have any nega­ti­ve effect on the sub­ject and his or her per­so­nal data shall be dele­ted after the test is per­for­med; (j) the test­ing in real world con­di­ti­ons is effec­tively over­seen by the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and deployer(s) with per­sons who are sui­ta­b­ly qua­li­fi­ed in the rele­vant field and have the neces­sa­ry capa­ci­ty, trai­ning and aut­ho­ri­ty to per­form their tasks; (k) the pre­dic­tions, recom­men­da­ti­ons or decis­i­ons of the AI system can be effec­tively rever­sed and dis­re­gard­ed. 
5 Any sub­ject of the test­ing in real world con­di­ti­ons, or his or her legal­ly desi­gna­ted repre­sen­ta­ti­ve, as appro­pria­te, may, wit­hout any resul­ting detri­ment and wit­hout having to pro­vi­de any justi­fi­ca­ti­on, with­draw from the test­ing at any time by revo­king his or her infor­med con­sent and request the imme­dia­te and per­ma­nent dele­ti­on of their per­so­nal data. The with­dra­wal of the infor­med con­sent shall not affect the acti­vi­ties alre­a­dy car­ri­ed out. 5a. In accordance with Artic­le 63a, Mem­ber Sta­tes shall con­fer their mar­ket sur­veil­lan­ce aut­ho­ri­ties the powers of requi­ring pro­vi­ders and pro­s­pec­ti­ve pro­vi­ders infor­ma­ti­on, of car­ry­ing out unan­noun­ced remo­te or on-site inspec­tions and on per­forming checks on the deve­lo­p­ment of the test­ing in real world con­di­ti­ons and the rela­ted pro­ducts. Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall use the­se powers to ensu­re a safe deve­lo­p­ment of the­se tests. 6. Any serious inci­dent iden­ti­fi­ed in the cour­se of the test­ing in real world con­di­ti­ons shall be repor­ted to the natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ty in accordance with Artic­le 62 of this Regu­la­ti­on. The pro­vi­der or pro­s­pec­ti­ve pro­vi­der shall adopt imme­dia­te miti­ga­ti­on mea­su­res or, fai­ling that, sus­pend the test­ing in real world con­di­ti­ons until such miti­ga­ti­on takes place or other­wi­se ter­mi­na­te it. The pro­vi­der or pro­s­pec­ti­ve pro­vi­der shall estab­lish a pro­ce­du­re for the prompt recall of the AI system upon such ter­mi­na­ti­on of the test­ing in real world con­di­ti­ons. 7. Pro­vi­ders or pro­s­pec­ti­ve pro­vi­ders shall noti­fy the natio­nal mar­ket sur­veil­lan­ce aut­ho­ri­ty in the Mem­ber State(s) whe­re the test­ing in real world con­di­ti­ons is to be con­duc­ted of the sus­pen­si­on or ter­mi­na­ti­on of the test­ing in real world con­di­ti­ons and the final out­co­mes. 8. The pro­vi­der and pro­s­pec­ti­ve pro­vi­der shall be lia­ble under appli­ca­ble Uni­on and Mem­ber Sta­tes lia­bi­li­ty legis­la­ti­on for any dama­ge cau­sed in the cour­se of their par­ti­ci­pa­ti­on in the test­ing in real world conditions. 

Artic­le 54b – Infor­med con­sent to par­ti­ci­pa­te in test­ing in real world con­di­ti­ons out­side AI regu­la­to­ry sandboxes

1. For the pur­po­se of test­ing in real world con­di­ti­ons under Artic­le 54a, infor­med con­sent shall be free­ly given by the sub­ject of test­ing pri­or to his or her par­ti­ci­pa­ti­on in such test­ing and after having been duly infor­med with con­cise, clear, rele­vant, and under­stan­da­ble infor­ma­ti­on regar­ding: (i) the natu­re and objec­ti­ves of the test­ing in real world con­di­ti­ons and the pos­si­ble incon­ve­ni­ence that may be lin­ked to his or her par­ti­ci­pa­ti­on; (ii) the con­di­ti­ons under which the test­ing in real world con­di­ti­ons is to be con­duc­ted, inclu­ding the expec­ted dura­ti­on of the subject’s par­ti­ci­pa­ti­on; (iii) the subject’s rights and gua­ran­tees regar­ding par­ti­ci­pa­ti­on, in par­ti­cu­lar his or her right to refu­se to par­ti­ci­pa­te in and the right to with­draw from test­ing in real world con­di­ti­ons at any time wit­hout any resul­ting detri­ment and wit­hout having to pro­vi­de any justi­fi­ca­ti­on; (iv) the moda­li­ties for reque­st­ing the rever­sal or the dis­re­gard of the pre­dic­tions, recom­men­da­ti­ons or decis­i­ons of the AI system; (v) the Uni­on-wide uni­que sin­gle iden­ti­fi­ca­ti­on num­ber of the test­ing in real world con­di­ti­ons in accordance with Artic­le 54a(4c) and the cont­act details of the pro­vi­der or its legal repre­sen­ta­ti­ve from whom fur­ther infor­ma­ti­on can be obtai­ned. 2 The infor­med con­sent shall be dated and docu­men­ted and a copy shall be given to the sub­ject or his or her legal representative. 

Artic­le 55 – Mea­su­res for pro­vi­ders and deployers, in par­ti­cu­lar SMEs, inclu­ding start-ups

1. Mem­ber Sta­tes shall under­ta­ke the fol­lo­wing actions: (a) pro­vi­de SMEs, inclu­ding start-ups, having a regi­stered office or a branch in the Uni­on, with prio­ri­ty access to the AI regu­la­to­ry sand­bo­xes, to the ext­ent that they ful­fil the eli­gi­bi­li­ty con­di­ti­ons and sel­ec­tion cri­te­ria. The prio­ri­ty access shall not pre­clude other SMEs inclu­ding start-ups other than tho­se refer­red to in the first sub­pa­ra­graph to access to the AI regu­la­to­ry sand­box, pro­vi­ded that they ful­fil the eli­gi­bi­li­ty con­di­ti­ons and sel­ec­tion cri­te­ria; (b) orga­ni­se spe­ci­fic awa­re­ness rai­sing and trai­ning acti­vi­ties on the appli­ca­ti­on of this Regu­la­ti­on tail­o­red to the needs of SMEs inclu­ding start-ups, users and, as appro­pria­te, local public aut­ho­ri­ties; (c) uti­li­se exi­sting dedi­ca­ted chan­nels and whe­re appro­pria­te, estab­lish new ones for com­mu­ni­ca­ti­on with SMEs inclu­ding start-ups, users, other inno­va­tors and, as appro­pria­te, local public aut­ho­ri­ties to pro­vi­de advice and respond to queries about the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding as regards par­ti­ci­pa­ti­on in AI regu­la­to­ry sand­bo­xes; (ca) faci­li­ta­te the par­ti­ci­pa­ti­on of SMEs and other rele­vant stake­hol­ders in the stan­dar­di­sati­on deve­lo­p­ment pro­cess. 2. The spe­ci­fic inte­rests and needs of the SME pro­vi­ders, inclu­ding start-ups, shall be taken into account when set­ting the fees for con­for­mi­ty assess­ment under Artic­le 43, redu­cing tho­se fees pro­por­tio­na­te­ly to their size, mar­ket size and other rele­vant indi­ca­tors. 2a. The AI Office shall under­ta­ke the fol­lo­wing actions: (a) upon request of the AI Board, pro­vi­de stan­dar­di­sed tem­pla­tes for the are­as cover­ed by this Regu­la­ti­on; (b) deve­lop and main­tain a sin­gle infor­ma­ti­on plat­form pro­vi­ding easy to use infor­ma­ti­on in rela­ti­on to this Regu­la­ti­on for all ope­ra­tors across the Uni­on; (c) orga­ni­se appro­pria­te com­mu­ni­ca­ti­on cam­paigns to rai­se awa­re­ness about the obli­ga­ti­ons ari­sing from this Regu­la­ti­on; (d) eva­lua­te and pro­mo­te the con­ver­gence of best prac­ti­ces in public pro­cu­re­ment pro­ce­du­res in rela­ti­on to AI systems. 

Artic­le 55a – Dero­ga­ti­ons for spe­ci­fic operators

2b. Microen­ter­pri­ses as defi­ned in Artic­le 2(3) of the Annex to the Com­mis­si­on Recom­men­da­ti­on 2003/361/EC con­cer­ning the defi­ni­ti­on of micro, small and medi­um- sized enter­pri­ses, pro­vi­ded tho­se enter­pri­ses do not have part­ner enter­pri­ses or lin­ked enter­pri­ses as defi­ned in Artic­le 3 of the same Annex may ful­fil cer­tain ele­ments of the qua­li­ty manage­ment system requi­red by Artic­le 17 of this Regu­la­ti­on in a sim­pli­fi­ed man­ner. For this pur­po­se, the Com­mis­si­on shall deve­lop gui­de­lines on the ele­ments of the qua­li­ty manage­ment system which may be ful­fil­led in a sim­pli­fi­ed man­ner con­side­ring the needs of micro enter­pri­ses wit­hout affec­ting the level of pro­tec­tion and the need for com­pli­ance with the requi­re­ments for high-risk AI systems. 2c. Para­graph 1 shall not be inter­pre­ted as exemp­ting tho­se ope­ra­tors from ful­fil­ling any other requi­re­ments and obli­ga­ti­ons laid down in this Regu­la­ti­on, inclu­ding tho­se estab­lished in Artic­les 9, 10, 11, 12, 13, 14, 15, 61 and 62. 

TITLE VI GOVERNANCE

Artic­le 55b – Gover­nan­ce at Uni­on level

1. The Com­mis­si­on shall deve­lop Uni­on exper­ti­se and capa­bi­li­ties in the field of arti­fi­ci­al intel­li­gence. For this pur­po­se, the Com­mis­si­on has estab­lished the Euro­pean AI Office by Decis­i­on […]. 2. Mem­ber Sta­tes shall faci­li­ta­te the tasks ent­ru­sted to the AI Office, as reflec­ted in this Regulation. 

Chap­ter 1 EUROPEAN ARTIFICIAL INTELLIGENCE BOARD

Artic­le 56 – Estab­lish­ment and struc­tu­re of the Euro­pean Arti­fi­ci­al Intel­li­gence Board

1. A ‘Euro­pean Arti­fi­ci­al Intel­li­gence Board’ (the ‘Board’) is estab­lished. 2. The Board shall be com­po­sed of one repre­sen­ta­ti­ve per Mem­ber Sta­te. The Euro­pean Data Pro­tec­tion Super­vi­sor shall par­ti­ci­pa­te as obser­ver. The AI Office shall also attend the Board’s mee­tings wit­hout taking part in the votes. Other natio­nal and Uni­on aut­ho­ri­ties, bodies or experts may be invi­ted to the mee­tings by the Board on a case by case basis, whe­re the issues dis­cus­sed are of rele­van­ce for them. 2a. Each repre­sen­ta­ti­ve shall be desi­gna­ted by their Mem­ber Sta­te for a peri­od of 3 years, rene­wa­ble once. 2b. Mem­ber Sta­tes shall ensu­re that their repre­sen­ta­ti­ves in the Board: (a) have the rele­vant com­pe­ten­ces and powers in their Mem­ber Sta­te so as to con­tri­bu­te actively to the achie­ve­ment of the Board’s tasks refer­red to in Artic­le 58; (b) are desi­gna­ted as a sin­gle cont­act point vis-à-vis the Board and, whe­re appro­pria­te, taking into account Mem­ber Sta­tes’ needs, as a sin­gle cont­act point for stake­hol­ders; (c) are empowered to faci­li­ta­te con­si­sten­cy and coor­di­na­ti­on bet­ween natio­nal com­pe­tent aut­ho­ri­ties in their Mem­ber Sta­te as regards the imple­men­ta­ti­on of this Regu­la­ti­on, inclu­ding through the coll­ec­tion of rele­vant data and infor­ma­ti­on for the pur­po­se of ful­fil­ling their tasks on the Board. 3. The desi­gna­ted repre­sen­ta­ti­ves of the Mem­ber Sta­tes shall adopt the Board’s rules of pro­ce­du­re by a two-thirds majo­ri­ty. The rules of pro­ce­du­re shall, in par­ti­cu­lar, lay down pro­ce­du­res for the sel­ec­tion pro­cess, dura­ti­on of man­da­te and spe­ci­fi­ca­ti­ons of the tasks of the Chair, the voting moda­li­ties, and the orga­ni­sa­ti­on of the Board’s acti­vi­ties and its sub- groups. 3a. The Board shall estab­lish two stan­ding sub-groups to pro­vi­de a plat­form for coope­ra­ti­on and exch­an­ge among mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fy­ing aut­ho­ri­ties on issues rela­ted to mar­ket sur­veil­lan­ce and noti­fi­ed bodies respec­tively. The stan­ding sub-group for mar­ket sur­veil­lan­ce should act as the Admi­ni­stra­ti­ve Coope­ra­ti­on Group (ADCO) for this Regu­la­ti­on in the mea­ning of Artic­le 30 of Regu­la­ti­on (EU) 2019/1020. The Board may estab­lish other stan­ding or tem­po­ra­ry sub-groups as appro­pria­te for the pur­po­se of exami­ning spe­ci­fic issues. Whe­re appro­pria­te, repre­sen­ta­ti­ves of the advi­so­ry forum as refer­red to in Artic­le 58a may be invi­ted to such sub-groups or to spe­ci­fic mee­tings of tho­se sub­groups in the capa­ci­ty of obser­vers. 3b. The Board shall be orga­ni­s­ed and ope­ra­ted so as to safe­guard the objec­ti­vi­ty and impar­tia­li­ty of its acti­vi­ties. 4. The Board shall be chai­red by one of the repre­sen­ta­ti­ves of the Mem­ber Sta­tes. The Euro­pean AI Office shall pro­vi­de the Secre­ta­ri­at for the Board, con­ve­ne the mee­tings upon request of the Chair and prepa­re the agen­da in accordance with the tasks of the Board pur­su­ant to this Regu­la­ti­on and its rules of procedure. 

Artic­le 58 – Tasks of the Board

The Board shall advi­se and assist the Com­mis­si­on and the Mem­ber Sta­tes in order to faci­li­ta­te the con­si­stent and effec­ti­ve appli­ca­ti­on of this Regu­la­ti­on. For this pur­po­se the Board may in par­ti­cu­lar: (a) con­tri­bu­te to the coor­di­na­ti­on among natio­nal com­pe­tent aut­ho­ri­ties respon­si­ble for the appli­ca­ti­on of this Regu­la­ti­on and, in coope­ra­ti­on and sub­ject to agree­ment of the con­cer­ned mar­ket sur­veil­lan­ce aut­ho­ri­ties, sup­port joint acti­vi­ties of mar­ket sur­veil­lan­ce aut­ho­ri­ties refer­red to in Artic­le 63(7a); (b) coll­ect and share tech­ni­cal and regu­la­to­ry exper­ti­se and best prac­ti­ces among Mem­ber Sta­tes; (c) pro­vi­de advice in the imple­men­ta­ti­on of this Regu­la­ti­on, in par­ti­cu­lar as regards the enforce­ment of rules on gene­ral pur­po­se AI models; (d) con­tri­bu­te to the har­mo­ni­sa­ti­on of admi­ni­stra­ti­ve prac­ti­ces in the Mem­ber Sta­tes, inclu­ding in rela­ti­on to the dero­ga­ti­on from the con­for­mi­ty assess­ment pro­ce­du­res refer­red to in Artic­le 47, the func­tio­ning of regu­la­to­ry sand­bo­xes and test­ing in real world con­di­ti­ons refer­red to in Artic­les 53, 54 and 54a; (e) upon the request of the Com­mis­si­on or on its own initia­ti­ve, issue recom­men­da­ti­ons and writ­ten opi­ni­ons on any rele­vant mat­ters rela­ted to the imple­men­ta­ti­on of this Regu­la­ti­on and to its con­si­stent and effec­ti­ve appli­ca­ti­on, inclu­ding: (i) on the deve­lo­p­ment and appli­ca­ti­on of codes of con­duct and code of prac­ti­ce pur­su­ant to this Regu­la­ti­on as well as the Commission’s gui­de­lines; (ii) the eva­lua­ti­on and review of this Regu­la­ti­on pur­su­ant to Artic­le 84, inclu­ding as regards the serious inci­dent reports refer­red to in Artic­le 62 and the func­tio­ning of the data­ba­se refer­red to in Artic­le 60, the pre­pa­ra­ti­on of the dele­ga­ted or imple­men­ting acts, and pos­si­ble ali­gnments of this Regu­la­ti­on with the legal acts listed in Annex II; (iii) on tech­ni­cal spe­ci­fi­ca­ti­ons or exi­sting stan­dards regar­ding the requi­re­ments set out in Tit­le III, Chap­ter 2; (iv) on the use of har­mo­ni­s­ed stan­dards or com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­les 40 and 41; (v) trends, such as Euro­pean glo­bal com­pe­ti­ti­ve­ness in arti­fi­ci­al intel­li­gence, the upt­ake of arti­fi­ci­al intel­li­gence in the Uni­on and the deve­lo­p­ment of digi­tal skills; (via) trends on the evol­ving typo­lo­gy of AI value chains, in par­ti­cu­lar on the resul­ting impli­ca­ti­ons in terms of accoun­ta­bi­li­ty; (vi) on the poten­ti­al need for amend­ment of Annex III in accordance with Artic­le 7 and on the poten­ti­al need for pos­si­ble revi­si­on of Artic­le 5 pur­su­ant to Artic­le 84, taking into account rele­vant available evi­dence and the latest deve­lo­p­ments in tech­no­lo­gy; (f) sup­port the Com­mis­si­on in pro­mo­ting AI liter­a­cy, public awa­re­ness and under­stan­ding of the bene­fits, risks, safe­guards and rights and obli­ga­ti­ons in rela­ti­on to the use of AI systems; (g) faci­li­ta­te the deve­lo­p­ment of com­mon cri­te­ria and a shared under­stan­ding among mar­ket ope­ra­tors and com­pe­tent aut­ho­ri­ties of the rele­vant con­cepts pro­vi­ded for in this Regu­la­ti­on, inclu­ding by con­tri­bu­ting to the deve­lo­p­ment of bench­marks; (h) coope­ra­te, as appro­pria­te, with other Uni­on insti­tu­ti­ons, bodies, offices and agen­ci­es, as well as rele­vant Uni­on expert groups and net­works in par­ti­cu­lar in the 
fields of product safety, cybersecurity, competition, digital and media services, financial services, consumer protection, data and fundamental rights protection; (i) contribute to the effective cooperation with the competent authorities of third countries and with international organisations; (j) assist national competent authorities and the Commission in developing the organisational and technical expertise required for the implementation of this Regulation, including by contributing to the assessment of training needs for staff of Member States involved in implementing this Regulation; (j1) assist the AI Office in supporting national competent authorities in the establishment and development of regulatory sandboxes and facilitate cooperation and information-sharing among regulatory sandboxes; (k) contribute and provide relevant advice in the development of guidance documents; (l) advise the Commission in relation to international matters on artificial intelligence; (m) provide opinions to the Commission on the qualified alerts regarding general purpose AI models; (n) receive opinions by the Member States on the qualified alerts regarding general purpose AI models and on national experiences and practices on the monitoring and enforcement of the AI systems, in particular systems integrating the general purpose AI models.

Artic­le 58a – Advi­so­ry forum

1. An advisory forum shall be established to advise and provide technical expertise to the Board and the Commission to contribute to their tasks under this Regulation. 2. The membership of the advisory forum shall represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum shall be balanced with regard to commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings. 3. The Commission shall appoint the members of the advisory forum, in accordance with the criteria set out in the previous paragraph, among stakeholders with recognised expertise in the field of AI. 4. The term of office of the members of the advisory forum shall be two years, which may be extended by no more than four years. 5. The Fundamental Rights Agency, the European Union Agency for Cybersecurity, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) shall be permanent members of the advisory forum. 6. The advisory forum shall draw up its rules of procedure. It shall elect two co-chairs from among its members, in accordance with criteria set out in paragraph 2. The term of office of the co-chairs shall be two years, renewable once. 7. The advisory forum shall hold meetings at least two times a year. The advisory forum may invite experts and other stakeholders to its meetings. 8. In fulfilling its role as set out in paragraph 1, the advisory forum may prepare opinions, recommendations and written contributions upon request of the Board or the Commission. 9. The advisory forum may establish standing or temporary subgroups as appropriate for the purpose of examining specific questions related to the objectives of this Regulation. 10. The advisory forum shall prepare an annual report of its activities. That report shall be made publicly available.

Chap­ter 1a SCIENTIFIC PANEL OF INDEPENDENT EXPERTS

Artic­le 58b – Sci­en­ti­fic panel of inde­pen­dent experts

1. The Commission shall, by means of an implementing act, make provisions on the establishment of a scientific panel of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2). 2. The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of artificial intelligence necessary for the tasks set out in paragraph 3, and shall be able to demonstrate that they meet all of the following conditions: (a) particular expertise and competence and scientific or technical expertise in the field of artificial intelligence; (b) independence from any provider of AI systems or general purpose AI models or systems; (c) ability to carry out activities diligently, accurately and objectively. The Commission, in consultation with the AI Board, shall determine the number of experts in the panel in accordance with the required needs and shall ensure fair gender and geographical representation. 3. The scientific panel shall advise and support the European AI Office, in particular with regard to the following tasks: (a) support the implementation and enforcement of this Regulation as regards general purpose AI models and systems, in particular by (i) alerting the AI Office of possible systemic risks at Union level of general purpose AI models, in accordance with Article 68h [Alerts of systemic risks by the scientific panel]; (ii) contributing to the development of tools and methodologies for evaluating capabilities of general purpose AI models and systems, including through benchmarks; (iii) providing advice on the classification of general purpose AI models with systemic risk; (iv) providing advice on the classification of different general purpose AI models and systems; (v) contributing to the development of tools and templates; (b) support the work of market surveillance authorities, at their request; (c) support cross-border market surveillance activities as referred to in Article 63(7a), without prejudice to the powers of market surveillance authorities; (d) support the AI Office when carrying out its duties in the context of the safeguard clause pursuant to Article 66. 4. The experts shall perform their tasks with impartiality and objectivity and shall ensure the confidentiality of information and data obtained in carrying out their tasks and activities. They shall neither seek nor take instructions from anyone when exercising their tasks under paragraph 3. Each expert shall draw up a declaration of interests, which shall be made publicly available. The AI Office shall establish systems and procedures to actively manage and prevent potential conflicts of interest. 5. The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedure and modalities for the scientific panel and its members to issue alerts and request the assistance of the AI Office for the performance of its tasks.

Artic­le 58c – Access to the pool of experts by the Mem­ber States

1. Member States may call upon experts of the scientific panel to support their enforcement activities under this Regulation. 2. The Member States may be required to pay fees for the advice and support provided by the experts. The structure and the level of fees as well as the scale and structure of recoverable costs shall be set out in the implementing act referred to in Article 58b(1), taking into account the objectives of the adequate implementation of this Regulation, cost-effectiveness and the necessity to ensure effective access to experts by all Member States. 3. The Commission shall facilitate timely access to the experts by the Member States, as needed, and ensure that the combination of support activities carried out by EU AI testing support structures pursuant to Article 68a and experts pursuant to this Article is efficiently organised and provides the best possible added value.

Chap­ter 2 NATIONAL COMPETENT AUTHORITIES

Artic­le 59 – Desi­gna­ti­on of natio­nal com­pe­tent aut­ho­ri­ties and sin­gle point of contact

2. Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority for the purpose of this Regulation as national competent authorities. These national competent authorities shall exercise their powers independently, impartially and without bias so as to safeguard the principles of objectivity of their activities and tasks and to ensure the application and implementation of this Regulation. The members of these authorities shall refrain from any action incompatible with their duties. Provided that those principles are respected, such activities and tasks may be performed by one or several designated authorities, in accordance with the organisational needs of the Member State. 3. Member States shall communicate to the Commission the identity of the notifying authorities and the market surveillance authorities and the tasks of those authorities, as well as any subsequent changes thereto. Member States shall make publicly available information on how competent authorities and the single point of contact can be contacted, through electronic communication means by… [12 months after the date of entry into force of this Regulation]. Member States shall designate a market surveillance authority to act as single point of contact for this Regulation and notify the Commission of the identity of the single point of contact. The Commission shall make a list of the single points of contact publicly available. 4. Member States shall ensure that the national competent authority is provided with adequate technical, financial and human resources, and infrastructure to fulfil its tasks effectively under this Regulation. In particular, the national competent authority shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, personal data protection, cybersecurity, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. Member States shall assess and, if deemed necessary, update competence and resource requirements referred to in this paragraph on an annual basis. 4a. National competent authorities shall ensure an adequate level of cybersecurity measures. 4c. When performing their tasks, the national competent authorities shall act in compliance with the confidentiality obligations set out in Article 70. 5. By one year after entry into force of this Regulation and once every two years thereafter Member States shall report to the Commission on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations. 6. The Commission shall facilitate the exchange of experience between national competent authorities. 7.
Natio­nal com­pe­tent aut­ho­ri­ties may pro­vi­de gui­dance and advice on the imple­men­ta­ti­on of this Regu­la­ti­on, in par­ti­cu­lar to SMEs inclu­ding start-ups, taking into account the Board’s and the Commission’s gui­dance and advice, as appro­pria­te. When­ever natio­nal com­pe­tent aut­ho­ri­ties intend to pro­vi­de gui­dance and advice with regard to an AI system in are­as cover­ed by other Uni­on legis­la­ti­on, the com­pe­tent natio­nal aut­ho­ri­ties under that Uni­on legis­la­ti­on shall be con­sul­ted, as appro­pria­te. 8. When Uni­on insti­tu­ti­ons, agen­ci­es and bodies fall within the scope of this Regu­la­ti­on, the Euro­pean Data Pro­tec­tion Super­vi­sor shall act as the com­pe­tent aut­ho­ri­ty for their supervision. 

TITLE VII EU DATABASE FOR HIGH-RISK AI SYSTEMS LISTED IN ANNEX III

Artic­le 60 – EU data­ba­se for high-risk AI systems listed in Annex III

1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraphs 2 and 2a concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Articles 51 and 54a. When setting the functional specifications of such database, the Commission shall consult the relevant experts, and when updating the functional specifications of such database, the Commission shall consult the AI Board. 2. The data listed in Annex VIII, Section A, shall be entered into the EU database by the provider or, where applicable, the authorised representative. 2a. The data listed in Annex VIII, Section B, shall be entered into the EU database by the deployer who is, or who acts on behalf of, public authorities, agencies or bodies, in accordance with Articles 51(1a) and (1b). 3. With the exception of the section referred to in Article 51(1c) and Article 54a(5), the information contained in the EU database registered in accordance with Article 51 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 54a shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for making this information also accessible to the public. 4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. 5. The Commission shall be the controller of the EU database. It shall make available to providers, prospective providers and deployers adequate technical and administrative support. The database shall comply with the applicable accessibility requirements.
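Article 60(3) above splits the database into a publicly accessible part and a restricted part depending on the legal basis of the registration. The following Python sketch merely restates that accessibility rule for illustration; it is not part of the Regulation, and the function and parameter names are assumptions chosen for this example.

# Illustrative, non-normative sketch of the accessibility rule in Article 60(3).
# Parameter names are assumptions made for this example.
def is_publicly_accessible(registered_under: str,
                           restricted_section: bool = False,
                           provider_consent: bool = False) -> bool:
    """registered_under is either 'Article 51' or 'Article 54a' (real world testing)."""
    if registered_under == "Article 51":
        # Public, except the sections referred to in Articles 51(1c) and 54a(5).
        return not restricted_section
    if registered_under == "Article 54a":
        # Accessible only to market surveillance authorities and the Commission,
        # unless the provider or prospective provider consents to public access.
        return provider_consent
    return False

print(is_publicly_accessible("Article 54a"))  # False without the provider's consent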

TITLE VIII POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE

Chap­ter 1 POST-MARKET MONITORING

Artic­le 61 – Post-mar­ket moni­to­ring by pro­vi­ders and post-mar­ket moni­to­ring plan for high-risk AI systems

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system. 2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law enforcement authorities. 3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by six months before the entry into application of this Regulation. 4. For high-risk AI systems covered by the legal acts referred to in Annex II, Section A, where a post-market monitoring system and plan is already established under that legislation, in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice to integrate, as appropriate, the necessary elements described in paragraphs 1, 2 and 3, using the template referred to in paragraph 3, into the already existing system and plan under the Union harmonisation legislation listed in Annex II, Section A, provided it achieves an equivalent level of protection. The first subparagraph shall also apply to high-risk AI systems referred to in point 5 of Annex III placed on the market or put into service by financial institutions that are subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation.

Chap­ter 2 SHARING OF INFORMATION ON SERIOUS INCIDENTS

Artic­le 62 – Report­ing of serious incidents

1. Pro­vi­ders of high-risk AI systems pla­ced on the Uni­on mar­ket shall report any serious inci­dent to the mar­ket sur­veil­lan­ce aut­ho­ri­ties of the Mem­ber Sta­tes whe­re that inci­dent occur­red. 1a. As a gene­ral rule, the peri­od for the report­ing refer­red to in para­graph 1 shall take account of the seve­ri­ty of the serious inci­dent. 1b. The noti­fi­ca­ti­on refer­red to in para­graph 1 shall be made imme­dia­te­ly after the pro­vi­der has estab­lished a cau­sal link bet­ween the AI system and the serious inci­dent or the rea­sonable likeli­hood of such a link, and, in any event, not later than 15 days after the pro­vi­der or, whe­re appli­ca­ble, the deployer, beco­mes awa­re of the serious inci­dent. 1c. Not­wi­th­stan­ding para­graph 1b, in the event of a wide­spread inf­rin­ge­ment or a serious inci­dent as defi­ned in Artic­le 3(44) point (b) the report refer­red to in para­graph 1 shall be pro­vi­ded imme­dia­te­ly, and not later than 2 days after the pro­vi­der or, whe­re appli­ca­ble, the deployer beco­mes awa­re of that inci­dent. 1d. Not­wi­th­stan­ding para­graph 1b, in the event of death of a per­son the report shall be pro­vi­ded imme­dia­te­ly after the pro­vi­der or the deployer has estab­lished or as soon as it suspects a cau­sal rela­ti­on­ship bet­ween the high-risk AI system and the serious inci­dent but not later than 10 days after the date on which the pro­vi­der or, whe­re appli­ca­ble, the deployer beco­mes awa­re of the serious inci­dent. 1e. Whe­re neces­sa­ry to ensu­re time­ly report­ing, the pro­vi­der or, whe­re appli­ca­ble, the deployer, may sub­mit an initi­al report that is incom­ple­te fol­lo­wed up by a com­ple­te report. 1a. Fol­lo­wing the report­ing of a serious inci­dent pur­su­ant to the first sub­pa­ra­graph, the pro­vi­der shall, wit­hout delay, per­form the neces­sa­ry inve­sti­ga­ti­ons in rela­ti­on to the serious inci­dent and the AI system con­cer­ned. This shall include a risk assess­ment of the inci­dent and cor­rec­ti­ve action. The pro­vi­der shall co-ope­ra­te with the com­pe­tent aut­ho­ri­ties and whe­re rele­vant with the noti­fi­ed body con­cer­ned during the inve­sti­ga­ti­ons refer­red to in the first sub­pa­ra­graph and shall not per­form any inve­sti­ga­ti­on which invol­ves alte­ring the AI system con­cer­ned in a way which may affect any sub­se­quent eva­lua­ti­on of the cau­ses of the inci­dent, pri­or to informing the com­pe­tent aut­ho­ri­ties of such action. 2. Upon recei­ving a noti­fi­ca­ti­on rela­ted to a serious inci­dent refer­red to in Artic­le 3(44)(c), the rele­vant mar­ket sur­veil­lan­ce aut­ho­ri­ty shall inform the natio­nal public aut­ho­ri­ties or bodies refer­red to in Artic­le 64(3). The Com­mis­si­on shall deve­lop dedi­ca­ted gui­dance to faci­li­ta­te com­pli­ance with the obli­ga­ti­ons set out in para­graph 1. That gui­dance shall be issued 12 months after the ent­ry into force of this Regu­la­ti­on, at the latest, and shall be asses­sed regu­lar­ly. 2a. The mar­ket sur­veil­lan­ce aut­ho­ri­ty shall take appro­pria­te mea­su­res, as pro­vi­ded in Artic­le 19 of the Regu­la­ti­on 2019/1020, within 7 days from the date it recei­ved the noti­fi­ca­ti­on refer­red to in para­graph 1 and fol­low the noti­fi­ca­ti­on pro­ce­du­res as pro­vi­ded in the Regu­la­ti­on 2019/1020. 3. 
For high-risk AI systems refer­red to in Annex III that are pla­ced on the mar­ket or put into ser­vice by pro­vi­ders that are sub­ject to Uni­on legis­la­ti­ve instru­ments lay­ing down report­ing obli­ga­ti­ons equi­va­lent to tho­se set out in this Regu­la­ti­on, the noti­fi­ca­ti­on of serious inci­dents shall be limi­t­ed to tho­se refer­red to in Artic­le 3(44)(c). 3a. For high-risk AI systems which are safe­ty com­pon­ents of devices, or are them­sel­ves devices, cover­ed by Regu­la­ti­on (EU) 2017/745 and Regu­la­ti­on (EU) 2017/746 the noti­fi­ca­ti­on of serious inci­dents shall be limi­t­ed to tho­se refer­red to in Artic­le 3(44)(c) and be made to the natio­nal com­pe­tent aut­ho­ri­ty cho­sen for this pur­po­se by the Mem­ber Sta­tes whe­re that inci­dent occur­red. 3a. Natio­nal com­pe­tent aut­ho­ri­ties shall imme­dia­te­ly noti­fy the Com­mis­si­on of any serious inci­dent, whe­ther or not it has taken action on it, in accordance with Artic­le 20 of Regu­la­ti­on 2019/1020.
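The staggered reporting periods in Article 62(1b) to (1d) reduce to a simple lookup from the date on which the provider or, where applicable, the deployer becomes aware of the serious incident. The following Python sketch is purely illustrative and not part of the Regulation; the category labels are assumptions chosen for this example, and the day counts are only outer limits, since the report must in any event be made immediately once the causal link or its reasonable likelihood is established.

from datetime import date, timedelta

# Illustrative, non-normative sketch of the outer reporting limits in Article 62.
# The category labels are assumptions made for this example.
MAX_REPORTING_DAYS = {
    "general": 15,              # Article 62(1b): default outer limit
    "widespread_or_3_44_b": 2,  # Article 62(1c): widespread infringement or Article 3(44)(b)
    "death": 10,                # Article 62(1d): death of a person
}

def latest_reporting_date(awareness_date: date, incident_category: str) -> date:
    """Latest date by which the report under Article 62(1) must be made."""
    return awareness_date + timedelta(days=MAX_REPORTING_DAYS[incident_category])

print(latest_reporting_date(date(2025, 3, 1), "death"))  # 2025-03-11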

Chap­ter 3 ENFORCEMENT

Artic­le 63 – Mar­ket sur­veil­lan­ce and con­trol of AI systems in the Uni­on market

1. Regu­la­ti­on (EU) 2019/1020 shall app­ly to AI systems cover­ed by this Regu­la­ti­on. Howe­ver, for the pur­po­se of the effec­ti­ve enforce­ment of this Regu­la­ti­on: (a) any refe­rence to an eco­no­mic ope­ra­tor under Regu­la­ti­on (EU) 2019/1020 shall be under­s­tood as inclu­ding all ope­ra­tors iden­ti­fi­ed in Artic­le 2(1) of this Regu­la­ti­on; (b) any refe­rence to a pro­duct under Regu­la­ti­on (EU) 2019/1020 shall be under­s­tood as inclu­ding all AI systems fal­ling within the scope of this Regu­la­ti­on. 2. As part of their report­ing obli­ga­ti­ons under Artic­le 34(4) of Regu­la­ti­on (EU) 2019/1020, the mar­ket sur­veil­lan­ce aut­ho­ri­ties shall report annu­al­ly, to the Com­mis­si­on and rele­vant natio­nal com­pe­ti­ti­on aut­ho­ri­ties any infor­ma­ti­on iden­ti­fi­ed in the cour­se of mar­ket sur­veil­lan­ce acti­vi­ties that may be of poten­ti­al inte­rest for the appli­ca­ti­on of Uni­on law on com­pe­ti­ti­on rules. They shall also annu­al­ly report to the Com­mis­si­on about the use of pro­hi­bi­ted prac­ti­ces that occur­red during that year and about the mea­su­res taken. 3. For high-risk AI systems, rela­ted to pro­ducts to which legal acts listed in Annex II, sec­tion A app­ly, the mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regu­la­ti­on shall be the aut­ho­ri­ty respon­si­ble for mar­ket sur­veil­lan­ce acti­vi­ties desi­gna­ted under tho­se legal acts. By dero­ga­ti­on from the pre­vious para­graph in justi­fi­ed cir­cum­stances, Mem­ber Sta­tes may desi­gna­te ano­ther rele­vant aut­ho­ri­ty to act as a mar­ket sur­veil­lan­ce aut­ho­ri­ty pro­vi­ded that coor­di­na­ti­on is ensu­red with the rele­vant sec­to­ral mar­ket sur­veil­lan­ce aut­ho­ri­ties respon­si­ble for the enforce­ment of the legal acts listed in Annex II. 3a. The pro­ce­du­res refer­red to in Artic­les 65, 66, 67 and 68 of this Regu­la­ti­on shall not app­ly to AI systems rela­ted to pro­ducts, to which legal acts listed in Annex II, sec­tion A app­ly, when such legal acts alre­a­dy pro­vi­de for pro­ce­du­res ensu­ring an equi­va­lent level of pro­tec­tion and having the same objec­ti­ve. In such a case, the­se sec­to­ral pro­ce­du­res shall app­ly instead. 3b. Wit­hout pre­ju­di­ce to the powers of mar­ket sur­veil­lan­ce aut­ho­ri­ties under Artic­le 14 of Regu­la­ti­on 2019/1020, for the pur­po­se of ensu­ring the effec­ti­ve enforce­ment of this Regu­la­ti­on, mar­ket sur­veil­lan­ce aut­ho­ri­ties may exer­cise the powers refer­red to in Artic­le 14(4)(d) and (j) of Regu­la­ti­on 2019/1020 remo­te­ly as appro­pria­te. 4. For high-risk AI systems pla­ced on the mar­ket, put into ser­vice or used by finan­cial insti­tu­ti­ons regu­la­ted by Uni­on legis­la­ti­on on finan­cial ser­vices, the mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regu­la­ti­on shall be the rele­vant natio­nal aut­ho­ri­ty respon­si­ble for the finan­cial super­vi­si­on of tho­se insti­tu­ti­ons under that legis­la­ti­on in so far as the pla­ce­ment on the mar­ket, put­ting into ser­vice or the use of the AI system is in direct con­nec­tion with the pro­vi­si­on of tho­se finan­cial ser­vices. 4a. By way of a dero­ga­ti­on from the pre­vious sub­pa­ra­graph, in justi­fi­ed cir­cum­stances and pro­vi­ded that coor­di­na­ti­on is ensu­red, ano­ther rele­vant aut­ho­ri­ty may be iden­ti­fi­ed by the Mem­ber Sta­te as mar­ket sur­veil­lan­ce aut­ho­ri­ty for the pur­po­ses of this Regu­la­ti­on. 
National market surveillance authorities supervising credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism (SSM) established by Council Regulation (EU) No 1024/2013, should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation. 5. For high-risk AI systems listed in point 1 of Annex III in so far as the systems are used for law enforcement purposes and for purposes listed in points 6, 7 and 8 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities or otherwise interfere with their activities when acting in their judicial capacity. 6. Where Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice acting in its judicial capacity. 7. Member States shall facilitate the coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex II or other Union legislation that might be relevant for the high-risk AI systems referred to in Annex III. 7a. Market surveillance authorities and the Commission shall be able to propose joint activities, including joint investigations, to be conducted by either market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness and providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across several Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office shall provide coordination support for joint investigations. 7a. Without prejudice to powers provided under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by the provider to the documentation as well as the training, validation and testing datasets used for the development of the high-risk AI system, including, where appropriate and subject to security safeguards, through application programming interfaces (‘API’) or other relevant technical means and tools enabling remote access. 7b.
Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall be gran­ted access to the source code of the high-risk AI system upon a rea­so­ned request and only when the fol­lo­wing cumu­la­ti­ve con­di­ti­ons are ful­fil­led: (a) access to source code is neces­sa­ry to assess the con­for­mi­ty of a high-risk AI system with the requi­re­ments set out in Tit­le III, Chap­ter 2; and (b) testing/auditing pro­ce­du­res and veri­fi­ca­ti­ons based on the data and docu­men­ta­ti­on pro­vi­ded by the pro­vi­der have been exhau­sted or pro­ved insuf­fi­ci­ent. 7c. Any infor­ma­ti­on and docu­men­ta­ti­on obtai­ned by mar­ket sur­veil­lan­ce aut­ho­ri­ties shall be trea­ted in com­pli­ance with the con­fi­den­tia­li­ty obli­ga­ti­ons set out in Artic­le 70. 
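Because the conditions in paragraph 7b are cumulative, access to source code remains the exception. The short Python sketch below restates that logic for illustration only; it is not part of the Regulation, and the parameter names are assumptions chosen for this example.

# Illustrative, non-normative sketch of the cumulative conditions in Article 63(7b).
# Parameter names are assumptions made for this example.
def source_code_access_permitted(reasoned_request: bool,
                                 necessary_for_conformity_assessment: bool,
                                 other_means_exhausted_or_insufficient: bool) -> bool:
    """Access requires a reasoned request and both conditions (a) and (b)."""
    return (reasoned_request
            and necessary_for_conformity_assessment        # condition (a)
            and other_means_exhausted_or_insufficient)     # condition (b)

print(source_code_access_permitted(True, True, False))  # False: condition (b) is not met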

Artic­le 63a – Mutu­al Assi­stance, mar­ket sur­veil­lan­ce and con­trol of gene­ral pur­po­se AI systems

1. Where an AI system is based on a general purpose AI model and the model and the system are developed by the same provider, the AI Office shall have powers to monitor and supervise compliance of this AI system with the obligations of this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall have all the powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020. 2. Where the relevant market surveillance authorities have sufficient reasons to consider that a general purpose AI system that can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to this Regulation is non-compliant with the requirements laid down in this Regulation, they shall cooperate with the AI Office to carry out an evaluation of compliance and inform the Board and other market surveillance authorities accordingly. 3. When a national market surveillance authority is unable to conclude its investigation on the high-risk AI system because of its inability to access certain information related to the general purpose AI model despite having made all appropriate efforts to obtain that information, it may submit a reasoned request to the AI Office, where access to this information can be enforced. In this case the AI Office shall supply to the applicant authority without delay, and in any event within 30 days, any information that the AI Office considers to be relevant in order to establish whether a high-risk AI system is non-compliant. National market surveillance authorities shall safeguard the confidentiality of the information they obtain in accordance with Article 70. The procedure provided in Chapter VI of Regulation (EU) 2019/1020 shall apply by analogy.

Artic­le 63b – Super­vi­si­on of test­ing in real world con­di­ti­ons by mar­ket sur­veil­lan­ce authorities

1. Mar­ket sur­veil­lan­ce aut­ho­ri­ties shall have the com­pe­tence and powers to ensu­re that test­ing in real world con­di­ti­ons is in accordance with this Regu­la­ti­on. 2. Whe­re test­ing in real world con­di­ti­ons is con­duc­ted for AI systems that are super­vi­sed within an AI regu­la­to­ry sand­box under Artic­le 54, the mar­ket sur­veil­lan­ce aut­ho­ri­ties shall veri­fy the com­pli­ance with the pro­vi­si­ons of Artic­le 54a as part of their super­vi­so­ry role for the AI regu­la­to­ry sand­box. Tho­se aut­ho­ri­ties may, as appro­pria­te, allow the test­ing in real world con­di­ti­ons to be con­duc­ted by the pro­vi­der or pro­s­pec­ti­ve pro­vi­der in dero­ga­ti­on to the con­di­ti­ons set out in Artic­le 54a(4) (f) and (g). 3. Whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty has been infor­med by the pro­s­pec­ti­ve pro­vi­der, the pro­vi­der or any third par­ty of a serious inci­dent or has other grounds for con­side­ring that the con­di­ti­ons set out in Artic­les 54a and 54b are not met, it may take any of the fol­lo­wing decis­i­ons on its ter­ri­to­ry, as appro­pria­te: (a) sus­pend or ter­mi­na­te the test­ing in real world con­di­ti­ons; (b) requi­re the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and user(s) to modi­fy any aspect of the test­ing in real world con­di­ti­ons. 4. Whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty has taken a decis­i­on refer­red to in para­graph 3 of this Artic­le or has issued an objec­tion within the mea­ning of Artic­le 54a(4)(b), the decis­i­on or the objec­tion shall indi­ca­te the grounds the­reof and the moda­li­ties and con­di­ti­ons for the pro­vi­der or pro­s­pec­ti­ve pro­vi­der to chall­enge the decis­i­on or objec­tion. 5. Whe­re appli­ca­ble, whe­re a mar­ket sur­veil­lan­ce aut­ho­ri­ty has taken a decis­i­on refer­red to in para­graph 3 of this Artic­le, it shall com­mu­ni­ca­te the grounds the­r­e­for to the mar­ket sur­veil­lan­ce aut­ho­ri­ties of the other Mem­ber Sta­tes in which the AI system has been tested in accordance with the test­ing plan. 

Artic­le 64 – Powers of aut­ho­ri­ties pro­tec­ting fun­da­men­tal rights

3. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation in accessible language and format when access to that documentation is necessary for effectively fulfilling their mandate within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request. 4. By three months after the entry into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available. Member States shall notify the list to the Commission and all other Member States and keep the list up to date. 5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within a reasonable time following the request. 6. Any information and documentation obtained by the national public authorities or bodies referred to in paragraph 3 pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.

Artic­le 65 – Pro­ce­du­re for deal­ing with AI systems pre­sen­ting a risk at natio­nal level

1. AI systems presenting a risk shall be understood as a product presenting a risk as defined in Article 3, point 19, of Regulation (EU) 2019/1020, insofar as risks to the health or safety or to fundamental rights of persons are concerned. 2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. Particular attention shall be given to AI systems presenting a risk to vulnerable groups (referred to in Article 5). When risks to fundamental rights are identified, the market surveillance authority shall also inform and fully cooperate with the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authority and the other national public authorities or bodies referred to in Article 64(3). Where, in the course of that evaluation, the market surveillance authority, where applicable in cooperation with the national public authority referred to in Article 64(3), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period it may prescribe and in any event no later than fifteen working days, or as provided for in the relevant Union harmonisation law as applicable. The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph. 3. Where the market surveillance authority considers that non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the operator to take. 4. The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the market throughout the Union. 5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system’s being made available on its national market or put into service, to withdraw the product or the standalone AI system from that market or to recall it. That authority shall without undue delay notify the Commission and the other Member States of those measures. 6.
The noti­fi­ca­ti­on refer­red to in para­graph 5 shall include all available details, in par­ti­cu­lar the infor­ma­ti­on neces­sa­ry for the iden­ti­fi­ca­ti­on of the non-com­pli­ant AI system, the ori­gin of the AI system and the sup­p­ly chain, the natu­re of the non-com­pli­ance alle­ged and the risk invol­ved, the natu­re and dura­ti­on of the natio­nal mea­su­res taken and the argu­ments put for­ward by the rele­vant ope­ra­tor. In par­ti­cu­lar, the mar­ket sur­veil­lan­ce aut­ho­ri­ties shall indi­ca­te whe­ther the non-com­pli­ance is due to one or more of the fol­lo­wing: (-a) non-com­pli­ance with the pro­hi­bi­ti­on of the arti­fi­ci­al intel­li­gence prac­ti­ces refer­red to in Artic­le 5; (a) a fail­ure of a high-risk AI system to meet requi­re­ments set out in Tit­le III, Chap­ter 2; (b) short­co­mings in the har­mo­ni­s­ed stan­dards or com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­les 40 and 41 con­fer­ring a pre­sump­ti­on of con­for­mi­ty; (ba) non-com­pli­ance with pro­vi­si­ons set out in Artic­le 52. 7. The mar­ket sur­veil­lan­ce aut­ho­ri­ties of the Mem­ber Sta­tes other than the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te initia­ting the pro­ce­du­re shall wit­hout undue delay inform the Com­mis­si­on and the other Mem­ber Sta­tes of any mea­su­res adopted and of any addi­tio­nal infor­ma­ti­on at their dis­po­sal rela­ting to the non-com­pli­ance of the AI system con­cer­ned, and, in the event of dis­agree­ment with the noti­fi­ed natio­nal mea­su­re, of their objec­tions. 8. Whe­re, within three months of rece­ipt of the noti­fi­ca­ti­on refer­red to in para­graph 5, no objec­tion has been rai­sed by eit­her a mar­ket sur­veil­lan­ce aut­ho­ri­ty of a Mem­ber Sta­te or the Com­mis­si­on in respect of a pro­vi­sio­nal mea­su­re taken by a mar­ket sur­veil­lan­ce aut­ho­ri­ty of ano­ther Mem­ber Sta­te, that mea­su­re shall be dee­med justi­fi­ed. This is wit­hout pre­ju­di­ce to the pro­ce­du­ral rights of the con­cer­ned ope­ra­tor in accordance with Artic­le 18 of Regu­la­ti­on (EU) 2019/1020. The peri­od refer­red to in the first sen­tence of this para­graph shall be redu­ced to thir­ty days in the event of non-com­pli­ance with the pro­hi­bi­ti­on of the arti­fi­ci­al intel­li­gence prac­ti­ces refer­red to in Artic­le 5. 9. The mar­ket sur­veil­lan­ce aut­ho­ri­ties of all Mem­ber Sta­tes shall ensu­re that appro­pria­te rest­ric­ti­ve mea­su­res are taken in respect of the pro­duct or the AI system con­cer­ned, such as with­dra­wal of the pro­duct or the AI system from their mar­ket, wit­hout undue delay. 

Article 65a – Procedure for dealing with AI systems classified by the provider as not high-risk in application of Annex III

1. Where a market surveillance authority has sufficient reasons to consider that an AI system classified by the provider as non-high-risk in application of Annex III is high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Annex III and the Commission guidelines. 2. Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period it may prescribe. 3. Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its national territory, it shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the actions which it has required the provider to take. 4. The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with the requirements and obligations of this Regulation within the period referred to in paragraph 2, the provider shall be subject to fines in accordance with Article 71. 5. The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the market throughout the Union. 6. Where the provider of the AI system concerned does not take adequate corrective action within the period referred to in paragraph 2, the provisions of Article 65, paragraphs 5 to 9, shall apply. 7. Where, in the course of the evaluation pursuant to paragraph 1, the market surveillance authority establishes that the AI system was misclassified by the provider as not high-risk in order to circumvent the application of the requirements in Title III, Chapter 2, the provider shall be subject to fines in accordance with Article 71. 8. In exercising their power to monitor the application of this Article and in accordance with Article 11 of Regulation (EU) 2019/1020, market surveillance authorities may perform appropriate checks, taking into account in particular information stored in the EU database referred to in Article 60.

Artic­le 66 – Uni­on safe­guard procedure

1. Whe­re, within three months of rece­ipt of the noti­fi­ca­ti­on refer­red to in Artic­le 65(5), or 30 days in the case of non-com­pli­ance with the pro­hi­bi­ti­on of the arti­fi­ci­al intel­li­gence prac­ti­ces refer­red to in Artic­le 5, objec­tions are rai­sed by the mar­ket sur­veil­lan­ce aut­ho­ri­ty of a Mem­ber Sta­te against a mea­su­re taken by ano­ther mar­ket sur­veil­lan­ce aut­ho­ri­ty, or whe­re the Com­mis­si­on con­siders the mea­su­re to be con­tra­ry to Uni­on law, the Com­mis­si­on shall wit­hout undue delay enter into con­sul­ta­ti­on with the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the rele­vant Mem­ber Sta­te and ope­ra­tor or ope­ra­tors and shall eva­lua­te the natio­nal mea­su­re. On the basis of the results of that eva­lua­ti­on, the Com­mis­si­on shall deci­de whe­ther the natio­nal mea­su­re is justi­fi­ed or not within six months, or 60 days in the case of non-com­pli­ance with the pro­hi­bi­ti­on of the arti­fi­ci­al intel­li­gence prac­ti­ces refer­red to in Artic­le 5, start­ing from the noti­fi­ca­ti­on refer­red to in Artic­le 65(5) and noti­fy such decis­i­on to the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te con­cer­ned. The Com­mis­si­on shall also inform all other mar­ket sur­veil­lan­ce aut­ho­ri­ties of such decis­i­on. 2. If the mea­su­re taken by the rele­vant Mem­ber Sta­tes is con­side­red justi­fi­ed by the Com­mis­si­on, all Mem­ber Sta­tes shall ensu­re that appro­pria­te rest­ric­ti­ve mea­su­res are taken in respect of the AI system con­cer­ned, such as with­dra­wal of the AI system from their mar­ket wit­hout undue delay, and shall inform the Com­mis­si­on accor­din­gly. If the natio­nal mea­su­re is con­side­red unju­sti­fi­ed by the Com­mis­si­on, the Mem­ber Sta­te con­cer­ned shall with­draw the mea­su­re and inform the Com­mis­si­on accor­din­gly. 3. Whe­re the natio­nal mea­su­re is con­side­red justi­fi­ed and the non-com­pli­ance of the AI system is attri­bu­ted to short­co­mings in the har­mo­ni­s­ed stan­dards or com­mon spe­ci­fi­ca­ti­ons refer­red to in Artic­les 40 and 41 of this Regu­la­ti­on, the Com­mis­si­on shall app­ly the pro­ce­du­re pro­vi­ded for in Artic­le 11 of Regu­la­ti­on (EU) No 1025/2012.
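The Union safeguard procedure runs on two clocks, both starting from the notification referred to in Article 65(5) and both shortened where the prohibited practices of Article 5 are at issue. The following Python sketch is a non-normative illustration of those outer limits; the function and key names are assumptions chosen for this example.

# Illustrative, non-normative sketch of the time limits in Articles 65(8) and 66(1).
def safeguard_procedure_periods(article_5_prohibition: bool) -> dict:
    """Outer time limits counted from the notification under Article 65(5)."""
    if article_5_prohibition:
        return {"objection_period": "30 days", "commission_decision": "60 days"}
    return {"objection_period": "3 months", "commission_decision": "6 months"}

print(safeguard_procedure_periods(article_5_prohibition=True))
# {'objection_period': '30 days', 'commission_decision': '60 days'}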

Artic­le 67 – Com­pli­ant AI systems which pre­sent a risk

1. Whe­re, having per­for­med an eva­lua­ti­on under Artic­le 65, after con­sul­ting the rele­vant natio­nal public aut­ho­ri­ty refer­red to in Artic­le 64(3), the mar­ket sur­veil­lan­ce aut­ho­ri­ty of a Mem­ber Sta­te finds that alt­hough a high-risk AI system is in com­pli­ance with this Regu­la­ti­on, it pres­ents a risk to the health or safe­ty of per­sons, fun­da­men­tal rights, or to other aspects of public inte­rest pro­tec­tion, it shall requi­re the rele­vant ope­ra­tor to take all appro­pria­te mea­su­res to ensu­re that the AI system con­cer­ned, when pla­ced on the mar­ket or put into ser­vice, no lon­ger pres­ents that risk wit­hout undue delay, within a peri­od it may pre­scri­be. 2. The pro­vi­der or other rele­vant ope­ra­tors shall ensu­re that cor­rec­ti­ve action is taken in respect of all the AI systems con­cer­ned that they have made available on the mar­ket throug­hout the Uni­on within the time­line pre­scri­bed by the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te refer­red to in para­graph 1. 3. The Mem­ber Sta­tes shall imme­dia­te­ly inform the Com­mis­si­on and the other Mem­ber Sta­tes. That infor­ma­ti­on shall include all available details, in par­ti­cu­lar the data neces­sa­ry for the iden­ti­fi­ca­ti­on of the AI system con­cer­ned, the ori­gin and the sup­p­ly chain of the AI system, the natu­re of the risk invol­ved and the natu­re and dura­ti­on of the natio­nal mea­su­res taken. 4. The Com­mis­si­on shall wit­hout undue delay enter into con­sul­ta­ti­on with the Mem­ber Sta­tes con­cer­ned and the rele­vant ope­ra­tor and shall eva­lua­te the natio­nal mea­su­res taken. On the basis of the results of that eva­lua­ti­on, the Com­mis­si­on shall deci­de whe­ther the mea­su­re is justi­fi­ed or not and, whe­re neces­sa­ry, pro­po­se appro­pria­te mea­su­res. 5. The Com­mis­si­on shall imme­dia­te­ly com­mu­ni­ca­te its decis­i­on to the Mem­ber Sta­tes con­cer­ned and to the rele­vant ope­ra­tors. It shall also inform of the decis­i­on all other Mem­ber States. 

Artic­le 68 – For­mal non-compliance

1. Whe­re the mar­ket sur­veil­lan­ce aut­ho­ri­ty of a Mem­ber Sta­te makes one of the fol­lo­wing fin­dings, it shall requi­re the rele­vant pro­vi­der to put an end to the non-com­pli­ance con­cer­ned, within a peri­od it may pre­scri­be: (a) the CE mar­king has been affi­xed in vio­la­ti­on of Artic­le 49; (b) the CE mar­king has not been affi­xed; (c) the EU decla­ra­ti­on of con­for­mi­ty has not been drawn up; (d) the EU decla­ra­ti­on of con­for­mi­ty has not been drawn up cor­rect­ly; (ea) the regi­stra­ti­on in the EU data­ba­se has not been car­ri­ed out; (eb) whe­re appli­ca­ble, the aut­ho­ri­sed repre­sen­ta­ti­ve has not been appoin­ted; (ec) the tech­ni­cal docu­men­ta­ti­on is not available. 2. Whe­re the non-com­pli­ance refer­red to in para­graph 1 per­sists, the mar­ket sur­veil­lan­ce aut­ho­ri­ty of the Mem­ber Sta­te con­cer­ned shall take appro­pria­te and pro­por­tio­na­te mea­su­res to rest­rict or pro­hi­bit the high-risk AI system being made available on the mar­ket or ensu­re that it is recal­led or with­drawn from the mar­ket wit­hout delay. 

Artic­le 68a – EU AI test­ing sup­port struc­tures in the area of arti­fi­ci­al intelligence

1. The Commission shall designate one or more EU AI testing support structures to perform the tasks listed under Article 21(6) of Regulation (EU) 2019/1020 in the area of artificial intelligence. 2. Without prejudice to the tasks referred to in paragraph 1, the EU AI testing support structures shall also provide independent technical or scientific advice at the request of the Board, the Commission, or market surveillance authorities. 

Chap­ter 3b REMEDIES

Artic­le 68a – Right to lodge a com­plaint with a mar­ket sur­veil­lan­ce authority

1. Without prejudice to other administrative or judicial remedies, complaints to the relevant market surveillance authority may be submitted by any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation. 2. In accordance with Regulation (EU) 2019/1020, complaints shall be taken into account for the purpose of conducting market surveillance activities and shall be handled in line with the dedicated procedures established for that purpose by the market surveillance authorities. 

Artic­le 68c – A right to expl­ana­ti­on of indi­vi­du­al decision-making

1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2, and which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety and fundamental rights shall have the right to request from the deployer clear and meaningful explanations on the role of the AI system in the decision-making procedure and the main elements of the decision taken. 2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under paragraph 1 follow from Union or national law in compliance with Union law. 3. This Article shall only apply to the extent that the right referred to in paragraph 1 is not already provided for under Union legislation. 

Artic­le 68d – Amend­ment to Direc­ti­ve (EU) 2020/1828

In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council, the following point is added: “(67a) Regulation xxxx/xxxx of the European Parliament and of the Council [laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (OJ L …)]”.

Artic­le 68e – Report­ing of brea­ches and pro­tec­tion of report­ing persons

Direc­ti­ve (EU) 2019/1937 of the Euro­pean Par­lia­ment and of the Coun­cil shall app­ly to the report­ing of brea­ches of this Regu­la­ti­on and the pro­tec­tion of per­sons report­ing such breaches. 

Chap­ter 3c SUPERVISION, INVESTIGATION, ENFORCEMENT AND MONITORING IN RESPECT OF PROVIDERS OF GENERAL PURPOSE AI MODELS

Artic­le 68f – Enforce­ment of obli­ga­ti­ons on pro­vi­ders of gene­ral pur­po­se AI models

1. The Commission shall have exclusive powers to supervise and enforce Chapter/Title [general purpose AI models], taking into account the procedural guarantees by virtue of Article 68m. The Commission shall entrust the implementation of these tasks to the European AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties. 2. Without prejudice to Article 63a paragraph 3, market surveillance authorities may request the Commission to exercise the powers laid down in this Chapter, where this is necessary and proportionate to assist with the fulfilment of their tasks under this Regulation. 

Artic­le 68g – Moni­to­ring actions

1. For the pur­po­ses of car­ry­ing out the tasks assi­gned to it under this Chap­ter, the AI Office may take the neces­sa­ry actions to moni­tor the effec­ti­ve imple­men­ta­ti­on and com­pli­ance with this Regu­la­ti­on by pro­vi­ders of gene­ral pur­po­se AI models, inclu­ding adherence to appro­ved codes of prac­ti­ce. 2. Down­stream pro­vi­ders shall have the right to lodge a com­plaint alleging an inf­rin­ge­ment of this Regu­la­ti­on. A com­plaint shall be duly rea­so­ned and at least indi­ca­te: (a) the point of cont­act of the pro­vi­der of the gene­ral pur­po­se AI model con­cer­ned; (b) descrip­ti­on of the rele­vant facts, the pro­vi­si­ons of this Regu­la­ti­on con­cer­ned and the rea­son why the down­stream pro­vi­der con­siders that the pro­vi­der of the gene­ral pur­po­se AI model con­cer­ned inf­rin­ged this Regu­la­ti­on; (c) any other infor­ma­ti­on that the down­stream pro­vi­der that sent the request con­siders rele­vant, inclu­ding, whe­re appro­pria­te, infor­ma­ti­on gathe­red on its own initiative. 

Artic­le 68h – Alerts of syste­mic risks by the sci­en­ti­fic panel

1. The sci­en­ti­fic panel may pro­vi­de a qua­li­fi­ed alert to the AI Office whe­re it has rea­son to suspect that (a) a gene­ral pur­po­se AI model poses con­cre­te iden­ti­fia­ble risk at Uni­on level; or (b) a gene­ral pur­po­se AI model meets the requi­re­ments refer­red to in Artic­le 52a [Clas­si­fi­ca­ti­on of gene­ral pur­po­se AI models with syste­mic risk]. 2. Upon such qua­li­fi­ed alert, the Com­mis­si­on, through the AI Office and after having infor­med the AI Board, may exer­cise the powers laid down in this Chap­ter for the pur­po­se of asses­sing the mat­ter. The AI Office shall inform the Board of any mea­su­re accor­ding to Artic­les 68i-68m. 3. A qua­li­fi­ed alert shall be duly rea­so­ned and at least indi­ca­te: (a) the point of cont­act of the pro­vi­der of the gene­ral pur­po­se AI model with syste­mic risk con­cer­ned; (b) a descrip­ti­on of the rele­vant facts and rea­sons for the sus­pi­ci­on of the sci­en­ti­fic panel; (c) any other infor­ma­ti­on that the sci­en­ti­fic panel con­siders rele­vant, inclu­ding, whe­re appro­pria­te, infor­ma­ti­on gathe­red on its own initiative. 
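Article 68h(3) lists the minimum contents of a qualified alert. A minimal sketch of that structure as a record type (the class and field names are illustrative, not prescribed by the Regulation):

from dataclasses import dataclass
from typing import Optional

@dataclass
class QualifiedAlert:
    # Art. 68h(3)(a): point of contact of the provider of the model concerned
    provider_point_of_contact: str
    # Art. 68h(3)(b): relevant facts and reasons for the panel's suspicion
    facts_and_reasons: str
    # Art. 68h(3)(c): any other information the panel considers relevant,
    # including information gathered on its own initiative
    other_relevant_information: Optional[str] = None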

Artic­le 68i – Power to request docu­men­ta­ti­on and information

1. The Com­mis­si­on may request the pro­vi­der of the gene­ral pur­po­se AI model con­cer­ned to pro­vi­de the docu­men­ta­ti­on drawn up by the pro­vi­der accor­ding to Artic­le 52c [Obli­ga­ti­ons for pro­vi­ders of gene­ral pur­po­se AI models] and 52d [Obli­ga­ti­ons on pro­vi­ders of gene­ral pur­po­se AI models with syste­mic risk] or any addi­tio­nal infor­ma­ti­on that is neces­sa­ry for the pur­po­se of asses­sing com­pli­ance of the pro­vi­der with this Regu­la­ti­on. 2. Befo­re the request for infor­ma­ti­on is sent, the AI Office may initia­te a struc­tu­red dia­lo­gue with the pro­vi­der of the gene­ral pur­po­se AI model. 3. Upon a duly sub­stan­tia­ted request from the sci­en­ti­fic panel, the Com­mis­si­on may issue a request for infor­ma­ti­on to a pro­vi­der of a gene­ral pur­po­se AI model, whe­re the access to infor­ma­ti­on is neces­sa­ry and pro­por­tio­na­te for the ful­film­ent of the tasks of the sci­en­ti­fic panel accor­ding to Artic­le 58b [Sci­en­ti­fic panel](2). 4. The request for infor­ma­ti­on shall sta­te the legal basis and the pur­po­se of the request, spe­ci­fy­ing what infor­ma­ti­on is requi­red and set the peri­od within which the infor­ma­ti­on is to be pro­vi­ded, and the fines pro­vi­ded for in Artic­le 72a [fines] for sup­p­ly­ing incor­rect, incom­ple­te or mis­lea­ding infor­ma­ti­on. 5. The pro­vi­der of the gene­ral pur­po­se AI model con­cer­ned or their repre­sen­ta­ti­ves and, in the case of legal per­sons, com­pa­nies or firms, or whe­re they have no legal per­so­na­li­ty, the per­sons aut­ho­ri­sed to repre­sent them by law or by their con­sti­tu­ti­on shall sup­p­ly the infor­ma­ti­on reque­sted on behalf of the pro­vi­der of the gene­ral pur­po­se AI model con­cer­ned. Lawy­ers duly aut­ho­ri­sed to act may sup­p­ly the infor­ma­ti­on on behalf of their cli­ents. The lat­ter shall remain ful­ly respon­si­ble if the infor­ma­ti­on sup­plied is incom­ple­te, incor­rect or misleading. 

Artic­le 68j – Power to con­duct evaluations

1. The AI Office, after con­sul­ting the Board, may con­duct eva­lua­tions of the gene­ral pur­po­se AI model con­cer­ned (a) to assess com­pli­ance of the pro­vi­der with the obli­ga­ti­ons under this Regu­la­ti­on, whe­re the infor­ma­ti­on gathe­red pur­su­ant to Artic­le 68i [Power to request infor­ma­ti­on] is insuf­fi­ci­ent; or (b) to inve­sti­ga­te syste­mic risks at Uni­on level of gene­ral pur­po­se AI models with syste­mic risk, in par­ti­cu­lar fol­lo­wing a qua­li­fi­ed report from the sci­en­ti­fic panel in accordance with point (c) of Artic­le 68f [Enforce­ment of obli­ga­ti­ons on pro­vi­ders of gene­ral pur­po­se AI models and gene­ral pur­po­se AI models with syste­mic risk](3). 2. The Com­mis­si­on may deci­de to appoint inde­pen­dent experts to car­ry out eva­lua­tions on its behalf, inclu­ding from the sci­en­ti­fic panel pur­su­ant to Artic­le [sci­en­ti­fic panel of inde­pen­dent experts]. All inde­pen­dent experts appoin­ted for this task shall meet the cri­te­ria out­lined in Artic­le 58b, para­graph 2. 3. For the pur­po­se of para­graph 1, the Com­mis­si­on may request access to the gene­ral pur­po­se AI model con­cer­ned through appli­ca­ti­on pro­gramming inter­faces (‘API’) or fur­ther appro­pria­te tech­ni­cal means and tools, inclu­ding through source code. 4. The request for access shall sta­te the legal basis, the pur­po­se and rea­sons of the request and set the peri­od within which the access is to be pro­vi­ded, and the fines pro­vi­ded for in Artic­le 72a [fines] for fail­ure to pro­vi­de access. 5. The pro­vi­ders of the gene­ral pur­po­se AI model con­cer­ned and, in the case of legal per­sons, com­pa­nies or firms, or whe­re they have no legal per­so­na­li­ty, the per­sons aut­ho­ri­sed to repre­sent them by law or by their con­sti­tu­ti­on shall pro­vi­de the access reque­sted on behalf of the pro­vi­der of the gene­ral pur­po­se AI model con­cer­ned. 6. The moda­li­ties and the con­di­ti­ons of the eva­lua­tions, inclu­ding the moda­li­ties for invol­ving inde­pen­dent experts and the pro­ce­du­re for the sel­ec­tion of the lat­ter, shall be set out in imple­men­ting acts. Tho­se imple­men­ting acts shall be adopted in accordance with the exami­na­ti­on pro­ce­du­re refer­red to in Artic­le 74(2). 7. Pri­or to reque­st­ing access to the gene­ral pur­po­se AI model con­cer­ned, the AI Office may initia­te a struc­tu­red dia­lo­gue with the pro­vi­der of the gene­ral pur­po­se AI model to gather more infor­ma­ti­on on the inter­nal test­ing of the model, inter­nal safe­guards for pre­ven­ting syste­mic risks, and other inter­nal pro­ce­du­res and mea­su­res the pro­vi­der has taken to miti­ga­te such risks. 

Artic­le 68k – Power to request measures

1. Whe­re neces­sa­ry and appro­pria­te, the Com­mis­si­on may request pro­vi­ders to (a) take appro­pria­te mea­su­res to com­ply with the obli­ga­ti­ons set out in Tit­le VII­Ia, Chap­ter 2 [Obli­ga­ti­ons for pro­vi­der of gene­ral pur­po­se AI models]; (b) requi­re a pro­vi­der to imple­ment miti­ga­ti­on mea­su­res, whe­re the eva­lua­ti­on car­ri­ed out in accordance with Artic­le 68j [Power to con­duct eva­lua­tions] has given rise to serious and sub­stan­tia­ted con­cern of a syste­mic risk at Uni­on level; (c) rest­rict the making available on the mar­ket, with­draw or recall the model. 2. Befo­re a mea­su­re is reque­sted, the AI Office may initia­te a struc­tu­red dia­lo­gue with the pro­vi­der of the gene­ral pur­po­se AI model. 3. If, during the struc­tu­red dia­lo­gue under para­graph 2, the pro­vi­der of the gene­ral pur­po­se AI model with syste­mic risk offers com­mit­ments to imple­ment miti­ga­ti­on mea­su­res to address a syste­mic risk at Uni­on level, the Com­mis­si­on may by decis­i­on make the­se com­mit­ments bin­ding and decla­re that the­re are no fur­ther grounds for action. 

Artic­le 68m – Pro­ce­du­ral rights of eco­no­mic ope­ra­tors of the gene­ral pur­po­se AI model

Article 18 of Regulation (EU) 2019/1020 shall apply by analogy to providers of the general purpose AI model, without prejudice to more specific procedural rights provided for in this Regulation. 

TITLE IX CODES OF CONDUCT

Artic­le 69 – Codes of con­duct for vol­un­t­a­ry appli­ca­ti­on of spe­ci­fic requirements

1. The AI Office, and the Mem­ber Sta­tes shall encou­ra­ge and faci­li­ta­te the dra­wing up of codes of con­duct, inclu­ding rela­ted gover­nan­ce mecha­nisms, inten­ded to foster the vol­un­t­a­ry appli­ca­ti­on to AI systems other than high-risk AI systems of some or all of the requi­re­ments set out in Tit­le III, Chap­ter 2 of this Regu­la­ti­on taking into account the available tech­ni­cal solu­ti­ons and indu­stry best prac­ti­ces allo­wing for the appli­ca­ti­on of such requi­re­ments. 2. The AI Office and the Mem­ber Sta­tes shall faci­li­ta­te the dra­wing up of codes of con­duct con­cer­ning the vol­un­t­a­ry appli­ca­ti­on, inclu­ding by deployers, of spe­ci­fic requi­re­ments to all AI systems, on the basis of clear objec­ti­ves and key per­for­mance indi­ca­tors to mea­su­re the achie­ve­ment of tho­se objec­ti­ves, inclu­ding ele­ments such as, but not limi­t­ed to: (a) appli­ca­ble ele­ments fore­seen in Euro­pean ethic gui­de­lines for trust­wor­t­hy AI; (b) asses­sing and mini­mi­zing the impact of AI systems on envi­ron­men­tal sus­taina­bi­li­ty, inclu­ding as regards ener­gy-effi­ci­ent pro­gramming and tech­ni­ques for effi­ci­ent design, trai­ning and use of AI; (c) pro­mo­ting AI liter­a­cy, in par­ti­cu­lar of per­sons deal­ing with the deve­lo­p­ment, ope­ra­ti­on and use of AI; (d) faci­li­ta­ting an inclu­si­ve and diver­se design of AI systems, inclu­ding through the estab­lish­ment of inclu­si­ve and diver­se deve­lo­p­ment teams and the pro­mo­ti­on of stake­hol­ders’ par­ti­ci­pa­ti­on in that pro­cess; (e) asses­sing and pre­ven­ting the nega­ti­ve impact of AI systems on vul­nerable per­sons or groups of per­sons, inclu­ding as regards acce­s­si­bi­li­ty for per­sons with a disa­bi­li­ty, as well as on gen­der equa­li­ty. 3. Codes of con­duct may be drawn up by indi­vi­du­al pro­vi­ders or deployers of AI systems or by orga­ni­sa­ti­ons repre­sen­ting them or by both, inclu­ding with the invol­vement of deployers and any inte­re­sted stake­hol­ders and their repre­sen­ta­ti­ve orga­ni­sa­ti­ons, inclu­ding civil socie­ty orga­ni­sa­ti­ons and aca­de­mia. Codes of con­duct may cover one or more AI systems taking into account the simi­la­ri­ty of the inten­ded pur­po­se of the rele­vant systems. 4. The AI Office, and the Mem­ber Sta­tes shall take into account the spe­ci­fic inte­rests and needs of SMEs, inclu­ding start-ups, when encou­ra­ging and faci­li­ta­ting the dra­wing up of codes of conduct. 

TITLE X CONFIDENTIALITY AND PENALTIES

Artic­le 70 – Confidentiality

1. The Com­mis­si­on, mar­ket sur­veil­lan­ce aut­ho­ri­ties and noti­fi­ed bodies and any other natu­ral or legal per­son invol­ved in the appli­ca­ti­on of this Regu­la­ti­on shall, in accordance with Uni­on or natio­nal law, respect the con­fi­den­tia­li­ty of infor­ma­ti­on and data obtai­ned in car­ry­ing out their tasks and acti­vi­ties in such a man­ner as to pro­tect, in par­ti­cu­lar: (a) intellec­tu­al pro­per­ty rights, and con­fi­den­ti­al busi­ness infor­ma­ti­on or trade secrets of a natu­ral or legal per­son, inclu­ding source code, except the cases refer­red to in Artic­le 5 of Direc­ti­ve 2016/943 on the pro­tec­tion of undis­c­lo­sed know-how and busi­ness infor­ma­ti­on (trade secrets) against their unlawful acqui­si­ti­on, use and dis­clo­sure app­ly; (b) the effec­ti­ve imple­men­ta­ti­on of this Regu­la­ti­on, in par­ti­cu­lar for the pur­po­se of inspec­tions, inve­sti­ga­ti­ons or audits; (ba) public and natio­nal secu­ri­ty inte­rests; (c) inte­gri­ty of cri­mi­nal or admi­ni­stra­ti­ve pro­ce­e­dings; (da) the inte­gri­ty of infor­ma­ti­on clas­si­fi­ed in accordance with Uni­on or natio­nal law. 1a. The aut­ho­ri­ties invol­ved in the appli­ca­ti­on of this Regu­la­ti­on pur­su­ant to para­graph 1 shall only request data that is strict­ly neces­sa­ry for the assess­ment of the risk posed by the AI system and for the exer­cise of their powers in com­pli­ance with this Regu­la­ti­on and Regu­la­ti­on 2019/1020. They shall put in place ade­qua­te and effec­ti­ve cyber­se­cu­ri­ty mea­su­res to pro­tect the secu­ri­ty and con­fi­den­tia­li­ty of the infor­ma­ti­on and data obtai­ned and shall dele­te the data coll­ec­ted as soon as it is no lon­ger nee­ded for the pur­po­se it was reque­sted for, in accordance with appli­ca­ble natio­nal or Euro­pean legis­la­ti­on. 2. Wit­hout pre­ju­di­ce to para­graph 1 and 1a, infor­ma­ti­on exch­an­ged on a con­fi­den­ti­al basis bet­ween the natio­nal com­pe­tent aut­ho­ri­ties and bet­ween natio­nal com­pe­tent aut­ho­ri­ties and the Com­mis­si­on shall not be dis­c­lo­sed wit­hout the pri­or con­sul­ta­ti­on of the ori­gi­na­ting natio­nal com­pe­tent aut­ho­ri­ty and the deployer when high-risk AI systems refer­red to in points 1, 6 and 7 of Annex III are used by law enforce­ment, bor­der con­trol, immi­gra­ti­on or asyl­um aut­ho­ri­ties, when such dis­clo­sure would jeo­par­di­se public and natio­nal secu­ri­ty inte­rests. This exch­an­ge of infor­ma­ti­on shall not cover sen­si­ti­ve ope­ra­tio­nal data in rela­ti­on to the acti­vi­ties of law enforce­ment, bor­der con­trol, immi­gra­ti­on or asyl­um aut­ho­ri­ties. When the law enforce­ment, immi­gra­ti­on or asyl­um aut­ho­ri­ties are pro­vi­ders of high-risk AI systems refer­red to in points 1, 6 and 7 of Annex III, the tech­ni­cal docu­men­ta­ti­on refer­red to in Annex IV shall remain within the pre­mi­ses of tho­se aut­ho­ri­ties. Tho­se aut­ho­ri­ties shall ensu­re that the mar­ket sur­veil­lan­ce aut­ho­ri­ties refer­red to in Artic­le 63(5) and (6), as appli­ca­ble, can, upon request, imme­dia­te­ly access the docu­men­ta­ti­on or obtain a copy the­reof. Only staff of the mar­ket sur­veil­lan­ce aut­ho­ri­ty hol­ding the appro­pria­te level of secu­ri­ty cle­ar­ance shall be allo­wed to access that docu­men­ta­ti­on or any copy the­reof. 3. 
Para­graphs 1, [1a] and 2 shall not affect the rights and obli­ga­ti­ons of the Com­mis­si­on, Mem­ber Sta­tes and their rele­vant aut­ho­ri­ties, as well as noti­fi­ed bodies, with regard to the exch­an­ge of infor­ma­ti­on and the dis­se­mi­na­ti­on of war­nings, inclu­ding in the con­text of cross-bor­der coope­ra­ti­on, nor the obli­ga­ti­ons of the par­ties con­cer­ned to pro­vi­de infor­ma­ti­on under cri­mi­nal law of the Mem­ber Sta­tes. 4. The Com­mis­si­on and Mem­ber Sta­tes may exch­an­ge, whe­re neces­sa­ry and in accordance with rele­vant pro­vi­si­ons of inter­na­tio­nal and trade agree­ments, con­fi­den­ti­al infor­ma­ti­on with regu­la­to­ry aut­ho­ri­ties of third count­ries with which they have con­clu­ded bila­te­ral or mul­ti­la­te­ral con­fi­den­tia­li­ty arran­ge­ments gua­ran­te­e­ing an ade­qua­te level of confidentiality. 

Artic­le 71 – Penalties

1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are properly and effectively implemented, taking into account the guidelines issued by the Commission pursuant to Article 82b. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into account the interests of SMEs including start-ups and their economic viability. 2. The Member States shall notify the Commission of those rules and of those measures without delay and at the latest by the date of their entry into application, and shall notify it, without delay, of any subsequent amendment affecting them. 3. Non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 35 000 000 EUR or, if the offender is a company, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 4. Non-compliance of an AI system with any of the following provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to 15 000 000 EUR or, if the offender is a company, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher: (b) obligations of providers pursuant to Article 16; (d) obligations of authorised representatives pursuant to Article 25; (e) obligations of importers pursuant to Article 26; (f) obligations of distributors pursuant to Article 27; (g) obligations of deployers pursuant to Article 29, paragraphs 1 to 6a; (h) requirements and obligations of notified bodies pursuant to Article 33, 34(1), 34(3), 34(4), 34a; (i) transparency obligations for providers and users pursuant to Article 52. 5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 7 500 000 EUR or, if the offender is a company, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. 5a. In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever of the two is lower. 6. 
When deci­ding whe­ther to impo­se an admi­ni­stra­ti­ve fine and on the amount of the admi­ni­stra­ti­ve fine in each indi­vi­du­al case, all rele­vant cir­cum­stances of the spe­ci­fic situa­ti­on shall be taken into account and, as appro­pria­te, regard shall be given to the fol­lo­wing: (a) the natu­re, gra­vi­ty and dura­ti­on of the inf­rin­ge­ment and of its con­se­quen­ces, taking into account the pur­po­se of the AI system, as well as, whe­re appro­pria­te, the num­ber of affec­ted per­sons and the level of dama­ge suf­fe­r­ed by them; (b) whe­ther admi­ni­stra­ti­ve fines have been alre­a­dy applied by other mar­ket sur­veil­lan­ce aut­ho­ri­ties of one or more Mem­ber Sta­tes to the same ope­ra­tor for the same inf­rin­ge­ment; (ba) whe­ther admi­ni­stra­ti­ve fines have been alre­a­dy applied by other aut­ho­ri­ties to the same ope­ra­tor for inf­rin­ge­ments of other Uni­on or natio­nal law, when such inf­rin­ge­ments result from the same acti­vi­ty or omis­si­on con­sti­tu­ting a rele­vant inf­rin­ge­ment of this Act; (c) the size, the annu­al tur­no­ver and mar­ket share of the ope­ra­tor com­mit­ting the inf­rin­ge­ment; (ca) any other aggravating or miti­ga­ting fac­tor appli­ca­ble to the cir­cum­stances of the case, such as finan­cial bene­fits gai­ned, or los­ses avo­ided, direct­ly or indi­rect­ly, from the inf­rin­ge­ment; (ca) the degree of coope­ra­ti­on with the natio­nal com­pe­tent aut­ho­ri­ties, in order to reme­dy the inf­rin­ge­ment and miti­ga­te the pos­si­ble adver­se effects of the inf­rin­ge­ment; (cb) the degree of respon­si­bi­li­ty of the ope­ra­tor taking into account the tech­ni­cal and orga­ni­sa­tio­nal mea­su­res imple­men­ted by them; (ce) the man­ner in which the inf­rin­ge­ment beca­me known to the natio­nal com­pe­tent aut­ho­ri­ties, in par­ti­cu­lar whe­ther, and if so to what ext­ent, the ope­ra­tor noti­fi­ed the inf­rin­ge­ment; (cf) the inten­tio­nal or negli­gent cha­rac­ter of the inf­rin­ge­ment; (cg) any action taken by the ope­ra­tor to miti­ga­te the harm of dama­ge suf­fe­r­ed by the affec­ted per­sons. 7. Each Mem­ber Sta­te shall lay down rules on to what ext­ent admi­ni­stra­ti­ve fines may be impo­sed on public aut­ho­ri­ties and bodies estab­lished in that Mem­ber Sta­te. 8. Depen­ding on the legal system of the Mem­ber Sta­tes, the rules on admi­ni­stra­ti­ve fines may be applied in such a man­ner that the fines are impo­sed by com­pe­tent natio­nal courts or other bodies as appli­ca­ble in tho­se Mem­ber Sta­tes. The appli­ca­ti­on of such rules in tho­se Mem­ber Sta­tes shall have an equi­va­lent effect. 8a. The exer­cise by the mar­ket sur­veil­lan­ce aut­ho­ri­ty of its powers under this Artic­le shall be sub­ject to appro­pria­te pro­ce­du­ral safe­guards in accordance with Uni­on and Mem­ber Sta­te law, inclu­ding effec­ti­ve judi­cial reme­dy and due pro­cess. 8b. Mem­ber Sta­tes shall, on an annu­al basis, report to the Com­mis­si­on about the admi­ni­stra­ti­ve fines they have issued during that year, in accordance with this Artic­le, and any rela­ted liti­ga­ti­on or judi­cial proceedings; 
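The ceilings in paragraphs 3 to 5 above combine a fixed amount with a share of total worldwide annual turnover, "whichever is higher", while paragraph 5a caps fines on SMEs at "whichever of the two is lower". A minimal sketch of that arithmetic (the function and parameter names are illustrative, not taken from the Regulation):

def fine_ceiling(fixed_eur: float, turnover_share: float,
                 annual_turnover_eur: float, is_sme: bool = False) -> float:
    # Art. 71(3)-(5): the higher of the fixed amount and the turnover-based amount.
    turnover_based = turnover_share * annual_turnover_eur
    if is_sme:
        # Art. 71(5a): for SMEs, including start-ups, the lower of the two applies.
        return min(fixed_eur, turnover_based)
    return max(fixed_eur, turnover_based)

# Example: prohibited practice under Article 5, company with EUR 600 million turnover:
# max(35 000 000, 7 % of 600 000 000) = EUR 42 million; EUR 35 million if it were an SME.
print(fine_ceiling(35_000_000, 0.07, 600_000_000))
print(fine_ceiling(35_000_000, 0.07, 600_000_000, is_sme=True))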

Artic­le 72 – Admi­ni­stra­ti­ve fines on Uni­on insti­tu­ti­ons, agen­ci­es and bodies

1. The Euro­pean Data Pro­tec­tion Super­vi­sor may impo­se admi­ni­stra­ti­ve fines on Uni­on insti­tu­ti­ons, agen­ci­es and bodies fal­ling within the scope of this Regu­la­ti­on. When deci­ding whe­ther to impo­se an admi­ni­stra­ti­ve fine and deci­ding on the amount of the admi­ni­stra­ti­ve fine in each indi­vi­du­al case, all rele­vant cir­cum­stances of the spe­ci­fic situa­ti­on shall be taken into account and due regard shall be given to the fol­lo­wing: (a) the natu­re, gra­vi­ty and dura­ti­on of the inf­rin­ge­ment and of its con­se­quen­ces, taking into account the pur­po­se of the AI system con­cer­ned as well as the num­ber of affec­ted per­sons and the level of dama­ge suf­fe­r­ed by them, and any rele­vant pre­vious inf­rin­ge­ment; (aa) the degree of respon­si­bi­li­ty of the Uni­on insti­tu­ti­on, agen­cy or body, taking into account tech­ni­cal and orga­ni­sa­tio­nal mea­su­res imple­men­ted by them; (ab) any action taken by the Uni­on insti­tu­ti­on, agen­cy or body to miti­ga­te the dama­ge suf­fe­r­ed by affec­ted per­sons; (b) the degree of coope­ra­ti­on with the Euro­pean Data Pro­tec­tion Super­vi­sor in order to reme­dy the inf­rin­ge­ment and miti­ga­te the pos­si­ble adver­se effects of the inf­rin­ge­ment, inclu­ding com­pli­ance with any of the mea­su­res pre­vious­ly orde­red by the Euro­pean Data Pro­tec­tion Super­vi­sor against the Uni­on insti­tu­ti­on or agen­cy or body con­cer­ned with regard to the same sub­ject mat­ter; (c) any simi­lar pre­vious inf­rin­ge­ments by the Uni­on insti­tu­ti­on, agen­cy or body; (ca) the man­ner in which the inf­rin­ge­ment beca­me known to the Euro­pean Data Pro­tec­tion Super­vi­sor, in par­ti­cu­lar whe­ther, and if so to what ext­ent, the Uni­on insti­tu­ti­on or body noti­fi­ed the inf­rin­ge­ment; (cb) the annu­al bud­get of the body. 2. Non-com­pli­ance with the pro­hi­bi­ti­on of the arti­fi­ci­al intel­li­gence prac­ti­ces refer­red to in Artic­le 5 shall be sub­ject to admi­ni­stra­ti­ve fines of up to EUR 1 500 000. 3. Non-com­pli­ance of the AI system with any requi­re­ments or obli­ga­ti­ons under this Regu­la­ti­on, other than tho­se laid down in Artic­les 5, shall be sub­ject to admi­ni­stra­ti­ve fines of up to EUR 750 000. 4. Befo­re taking decis­i­ons pur­su­ant to this Artic­le, the Euro­pean Data Pro­tec­tion Super­vi­sor shall give the Uni­on insti­tu­ti­on, agen­cy or body which is the sub­ject of the pro­ce­e­dings con­duc­ted by the Euro­pean Data Pro­tec­tion Super­vi­sor the oppor­tu­ni­ty of being heard on the mat­ter regar­ding the pos­si­ble inf­rin­ge­ment. The Euro­pean Data Pro­tec­tion Super­vi­sor shall base his or her decis­i­ons only on ele­ments and cir­cum­stances on which the par­ties con­cer­ned have been able to com­ment. Com­plainants, if any, shall be asso­cia­ted clo­se­ly with the pro­ce­e­dings. 5. The rights of defence of the par­ties con­cer­ned shall be ful­ly respec­ted in the pro­ce­e­dings. They shall be entit­led to have access to the Euro­pean Data Pro­tec­tion Supervisor’s file, sub­ject to the legi­ti­ma­te inte­rest of indi­vi­du­als or under­ta­kings in the pro­tec­tion of their per­so­nal data or busi­ness secrets. 6. Funds coll­ec­ted by impo­si­ti­on of fines in this Artic­le shall con­tri­bu­te to the gene­ral bud­get of the Uni­on. The fines shall not affect the effec­ti­ve ope­ra­ti­on of the Uni­on insti­tu­ti­on, body or agen­cy fined. 6a. 
The Euro­pean Data Pro­tec­tion Super­vi­sor shall, on an annu­al basis, noti­fy the Com­mis­si­on of the admi­ni­stra­ti­ve fines it has impo­sed pur­su­ant to this Artic­le and any liti­ga­ti­on or judi­cial proceedings. 

Artic­le 72a – Fines for pro­vi­ders of gene­ral pur­po­se AI models

1. The Commission may impose on providers of general purpose AI models fines not exceeding 3 % of their total worldwide turnover in the preceding financial year or 15 million EUR, whichever is higher. Fines should be imposed one year after the entry into application of the relevant provisions in this Regulation in order to allow providers sufficient time to adapt, when the Commission finds that the provider intentionally or negligently: (a) infringes the relevant provisions of this Regulation; (b) fails to comply with a request for documents or information pursuant to Article 68i [Power to request documentation and information], or supplies incorrect, incomplete or misleading information; (c) fails to comply with a measure requested under Article 68k [Power to request measures]; (d) fails to make available to the Commission access to the general purpose AI model or general purpose AI model with systemic risk with a view to conducting an evaluation pursuant to Article 68j [Power to conduct evaluations]. In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also take into account commitments made in accordance with Article 68k(3) or in relevant codes of practice in accordance with Article 52e [Codes of practice]. 2. Before adopting the decision pursuant to paragraph 1 of this Article, the Commission shall communicate its preliminary findings to the provider of the general purpose AI model or general purpose AI model with systemic risk and give it the opportunity to be heard. 2a. Fines imposed in accordance with this Article shall be proportionate, dissuasive and effective. 2b. The information on the fines shall also be communicated to the Board as appropriate. 3. The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions whereby the Commission has fixed a fine. It may cancel, reduce or increase the fine imposed. 4. The Commission shall adopt implementing acts concerning the modalities and practical arrangements for the proceedings in view of the possible adoption of decisions pursuant to paragraph 1. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
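Paragraph 1 caps fines for providers of general purpose AI models at the higher of 3 % of total worldwide turnover in the preceding financial year and EUR 15 million. A minimal sketch (the function name is illustrative, not taken from the Regulation):

def gpai_fine_ceiling(annual_turnover_eur: float) -> float:
    # Art. 72a(1): not exceeding 3 % of total worldwide turnover in the preceding
    # financial year or EUR 15 million, whichever is higher.
    return max(0.03 * annual_turnover_eur, 15_000_000)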

TITLE XI DELEGATION OF POWER AND COMMITTEE PROCEDURE

Artic­le 73 – Exer­cise of the delegation

1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article. 2. The power to adopt delegated acts referred to in [Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5)] shall be conferred on the Commission for a period of five years from … [the date of entry into force of the Regulation]. The Commission shall draw up a report in respect of the delegation of power not later than 9 months before the end of the five-year period. The delegation of power shall be tacitly extended for periods of an identical duration, unless the European Parliament or the Council opposes such extension not later than three months before the end of each period. 3. The delegation of power referred to in [Article 7(1), Article 7(3), Article 11(3), Article 43(5) and (6) and Article 48(5)] may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force. 4. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council. 5. Any delegated act adopted pursuant to [Article 4], Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council. 

Artic­le 74 – Com­mit­tee procedure

1. The Com­mis­si­on shall be assi­sted by a com­mit­tee. That com­mit­tee shall be a com­mit­tee within the mea­ning of Regu­la­ti­on (EU) No 182/2011. 2. Whe­re refe­rence is made to this para­graph, Artic­le 5 of Regu­la­ti­on (EU) No 182/2011 shall apply. 

TITLE XII FINAL PROVISIONS

Artic­le 75 – Amend­ment to Regu­la­ti­on (EC) No 300/2008

In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added: “When adopting detailed measures related to technical specifications and procedures for approval and use of security equipment concerning Artificial Intelligence systems in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Chapter 2, Title III of that Regulation shall be taken into account. * Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Artic­le 76 – Amend­ment to Regu­la­ti­on (EU) No 167/2013

In Artic­le 17(5) of Regu­la­ti­on (EU) No 167/2013, the fol­lo­wing sub­pa­ra­graph is added: “When adop­ting dele­ga­ted acts pur­su­ant to the first sub­pa­ra­graph con­cer­ning arti­fi­ci­al intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] of the Euro­pean Par­lia­ment and of the Coun­cil*, the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account. * Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] (OJ …).”

Artic­le 77 – Amend­ment to Regu­la­ti­on (EU) No 168/2013

In Artic­le 22(5) of Regu­la­ti­on (EU) No 168/2013, the fol­lo­wing sub­pa­ra­graph is added: “When adop­ting dele­ga­ted acts pur­su­ant to the first sub­pa­ra­graph con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX on [Arti­fi­ci­al Intel­li­gence] of the Euro­pean Par­lia­ment and of the Coun­cil*, the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account. * Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] (OJ …).”

Artic­le 78 – Amend­ment to Direc­ti­ve 2014/90/EU

In Artic­le 8 of Direc­ti­ve 2014/90/EU, the fol­lo­wing para­graph is added: “4. “For Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] of the Euro­pean Par­lia­ment and of the Coun­cil*, when car­ry­ing out its acti­vi­ties pur­su­ant to para­graph 1 and when adop­ting tech­ni­cal spe­ci­fi­ca­ti­ons and test­ing stan­dards in accordance with para­graphs 2 and 3, the Com­mis­si­on shall take into account the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on. * Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] (OJ …).”

Artic­le 79 – Amend­ment to Direc­ti­ve (EU) 2016/797

In Artic­le 5 of Direc­ti­ve (EU) 2016/797, the fol­lo­wing para­graph is added: “12. “When adop­ting dele­ga­ted acts pur­su­ant to para­graph 1 and imple­men­ting acts pur­su­ant to para­graph 11 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] of the Euro­pean Par­lia­ment and of the Coun­cil*, the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account. * Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] (OJ …).”.

Artic­le 80 – Amend­ment to Regu­la­ti­on (EU) 2018/858

In Artic­le 5 of Regu­la­ti­on (EU) 2018/858 the fol­lo­wing para­graph is added: “4. “When adop­ting dele­ga­ted acts pur­su­ant to para­graph 3 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] of the Euro­pean Par­lia­ment and of the Coun­cil *, the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account. * Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] (OJ …).”. ”

Artic­le 81 – Amend­ment to Regu­la­ti­on (EU) 2018/1139

Regu­la­ti­on (EU) 2018/1139 is amen­ded as fol­lows: (1)In Artic­le 17, the fol­lo­wing para­graph is added: “3. “Wit­hout pre­ju­di­ce to para­graph 2, when adop­ting imple­men­ting acts pur­su­ant to para­graph 1 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] of the Euro­pean Par­lia­ment and of the Coun­cil*, the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account. * Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] (OJ …).” ” (2)In Artic­le 19, the fol­lo­wing para­graph is added: “4. When adop­ting dele­ga­ted acts pur­su­ant to para­graphs 1 and 2 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence], the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account.” (3)In Artic­le 43, the fol­lo­wing para­graph is added: “4. When adop­ting imple­men­ting acts pur­su­ant to para­graph 1 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence], the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account.” (4)In Artic­le 47, the fol­lo­wing para­graph is added: “3. When adop­ting dele­ga­ted acts pur­su­ant to para­graphs 1 and 2 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence], the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account.” (5)In Artic­le 57, the fol­lo­wing para­graph is added: “ When adop­ting tho­se imple­men­ting acts con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence], the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account.” (6)In Artic­le 58, the fol­lo­wing para­graph is added: “3. When adop­ting dele­ga­ted acts pur­su­ant to para­graphs 1 and 2 con­cer­ning Arti­fi­ci­al Intel­li­gence systems which are safe­ty com­pon­ents in the mea­ning of Regu­la­ti­on (EU) YYY/XX [on Arti­fi­ci­al Intel­li­gence] , the requi­re­ments set out in Tit­le III, Chap­ter 2 of that Regu­la­ti­on shall be taken into account..” 

Artic­le 82 – Amend­ment to Regu­la­ti­on (EU) 2019/2144

In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added: “3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account. * Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Artic­le 82a – Gui­de­lines from the Com­mis­si­on on the imple­men­ta­ti­on of this Regulation

1. The Com­mis­si­on shall deve­lop gui­de­lines on the prac­ti­cal imple­men­ta­ti­on of this Regu­la­ti­on, and in par­ti­cu­lar on: (a) the appli­ca­ti­on of the requi­re­ments and obli­ga­ti­ons refer­red to in Artic­les 8 – 15 and Artic­le 28; (b) the pro­hi­bi­ted prac­ti­ces refer­red to in Artic­le 5; (c) the prac­ti­cal imple­men­ta­ti­on of the pro­vi­si­ons rela­ted to sub­stan­ti­al modi­fi­ca­ti­on; (d) the prac­ti­cal imple­men­ta­ti­on of trans­pa­ren­cy obli­ga­ti­ons laid down in Artic­le 52; (e) detail­ed infor­ma­ti­on on the rela­ti­on­ship of this Regu­la­ti­on with the legis­la­ti­on refer­red to in Annex II of this Regu­la­ti­on as well as other rele­vant Uni­on law, inclu­ding as regards con­si­sten­cy in their enforce­ment; (f) the appli­ca­ti­on of the defi­ni­ti­on of an AI system as set out in Artic­le 3(1). When issuing such gui­de­lines, the Com­mis­si­on shall pay par­ti­cu­lar atten­ti­on to the needs of SMEs inclu­ding start-ups, local public aut­ho­ri­ties and sec­tors most likely to be affec­ted by this Regu­la­ti­on. The gui­de­lines refer­red to in the first sub­pa­ra­graph shall take due account of the gene­ral­ly ack­now­led­ged sta­te of the art on AI, as well as of rele­vant har­mo­ni­s­ed stan­dards and com­mon spe­ci­fi­ca­ti­ons that are refer­red to in Artic­les 40 and 41, or of tho­se har­mo­ni­s­ed stan­dards or tech­ni­cal spe­ci­fi­ca­ti­ons that are set out pur­su­ant to Uni­on har­mo­ni­sa­ti­on law. 2. Upon request of the Mem­ber Sta­tes or the AI Office, or on its own initia­ti­ve, the Com­mis­si­on shall update alre­a­dy adopted gui­de­lines when dee­med necessary. 

Artic­le 83 – AI systems alre­a­dy pla­ced on the mar­ket or put into service

1. Wit­hout pre­ju­di­ce to the appli­ca­ti­on of Artic­le 5 as refer­red in Artic­le 85 (3) (-aa) AI systems which are com­pon­ents of the lar­ge-sca­le IT systems estab­lished by the legal acts listed in Annex IX that have been pla­ced on the mar­ket or put into ser­vice befo­re 12 months after the date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(2) shall be brought into com­pli­ance with this Regu­la­ti­on by end of 2030. The requi­re­ments laid down in this Regu­la­ti­on shall be taken into account in the eva­lua­ti­on of each lar­ge-sca­le IT systems estab­lished by the legal acts listed in Annex IX to be under­ta­ken as pro­vi­ded for in tho­se respec­ti­ve acts and when­ever tho­se legal acts are repla­ced or amen­ded. 2. Wit­hout pre­ju­di­ce to the appli­ca­ti­on of Artic­le 5 as refer­red in Artic­le 85 (3) (-aa ) this Regu­la­ti­on shall app­ly to ope­ra­tors of high-risk AI systems, other than the ones refer­red to in para­graph 1, that have been pla­ced on the mar­ket or put into ser­vice befo­re [date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(2)], only if, from that date, tho­se systems are sub­ject to signi­fi­cant chan­ges in their designs. In the case of high-risk AI systems inten­ded to be used by public aut­ho­ri­ties, pro­vi­ders and deployers of such systems shall take the neces­sa­ry steps to com­ply with the requi­re­ments of the pre­sent Regu­la­ti­on four years after the date of ent­ry into appli­ca­ti­on of this Regu­la­ti­on. 3. Pro­vi­ders of gene­ral pur­po­se AI models that have been pla­ced on the mar­ket befo­re [date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in point a) Artic­le 85(3)] shall take the neces­sa­ry steps in order to com­ply with the obli­ga­ti­ons laid down in this Regu­la­ti­on by [2 years after the date of ent­ry into appli­ca­ti­on of this Regu­la­ti­on refer­red to in point a) of 85(3)].
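Article 83 attaches different transitional deadlines to systems and models already on the market. A minimal sketch that records those rules as a lookup table (the category keys are illustrative labels, and the bracketed dates of the text are left unresolved):

TRANSITIONAL_RULES = {
    # Art. 83(1): AI components of Annex IX large-scale IT systems placed on the
    # market or put into service before [application date + 12 months]
    "annex_ix_large_scale_it_component": "bring into compliance by end of 2030",
    # Art. 83(2): other high-risk systems already on the market are covered only
    # if their designs change significantly after the date of application ...
    "high_risk_already_on_market": "covered only upon significant design change",
    # ... except high-risk systems intended to be used by public authorities
    "high_risk_used_by_public_authorities": "comply within 4 years of the date of application",
    # Art. 83(3): general purpose AI models already on the market
    "general_purpose_ai_model": "comply within 2 years of the Art. 85(3)(a) date",
}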

Artic­le 84 – Eva­lua­ti­on and review

1. The Com­mis­si­on shall assess the need for amend­ment of the list in Annex III, the list of pro­hi­bi­ted AI prac­ti­ces in Artic­le 5, once a year fol­lo­wing the ent­ry into force of this Regu­la­ti­on, and until the end of the peri­od of the dele­ga­ti­on of power. The Com­mis­si­on shall sub­mit the fin­dings of that assess­ment to the Euro­pean Par­lia­ment and the Coun­cil. 2. By two years after the date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(2) and every four years the­re­af­ter, the Com­mis­si­on shall eva­lua­te and report to the Euro­pean Par­lia­ment and to the Coun­cil on the need for amend­ment of the fol­lo­wing: – the need for exten­si­on of exi­sting area hea­dings or addi­ti­on of new area hea­dings in Annex III; – the list of AI systems requi­ring addi­tio­nal trans­pa­ren­cy mea­su­res in Artic­le 52; – the effec­ti­ve­ness of the super­vi­si­on and gover­nan­ce system. 2a. By three years after the date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(3) and every four years the­re­af­ter, the Com­mis­si­on shall sub­mit a report on the eva­lua­ti­on and review of this Regu­la­ti­on to the Euro­pean Par­lia­ment and to the Coun­cil. This report shall include an assess­ment with regard to the struc­tu­re of enforce­ment and the pos­si­ble need for an Uni­on agen­cy to resol­ve any iden­ti­fi­ed short­co­mings. On the basis of the fin­dings that report shall, whe­re appro­pria­te, be accom­pa­nied by a pro­po­sal for amend­ment of this Regu­la­ti­on. The reports shall be made public. 3. The reports refer­red to in para­graph 2 shall devo­te spe­ci­fic atten­ti­on to the fol­lo­wing: (a) the sta­tus of the finan­cial, tech­ni­cal and human resour­ces of the natio­nal com­pe­tent aut­ho­ri­ties in order to effec­tively per­form the tasks assi­gned to them under this Regu­la­ti­on; (b) the sta­te of pen­al­ties, and nota­b­ly admi­ni­stra­ti­ve fines as refer­red to in Artic­le 71(1), applied by Mem­ber Sta­tes to inf­rin­ge­ments of the pro­vi­si­ons of this Regu­la­ti­on; (ba) adopted har­mo­ni­s­ed stan­dards and com­mon spe­ci­fi­ca­ti­ons deve­lo­ped to sup­port this Regu­la­ti­on; (bb) the num­ber of com­pa­nies that enter the mar­ket after the enter into appli­ca­ti­on of the regu­la­ti­on and how many of them are SMEs. 3a. By … [two years after the date of ent­ry into appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(2)] the Com­mis­si­on shall eva­lua­te the func­tio­ning of the AI office, whe­ther the office has been given suf­fi­ci­ent powers and com­pe­ten­ces to ful­fil its tasks and whe­ther it would be rele­vant and nee­ded for the pro­per imple­men­ta­ti­on and enforce­ment of this Regu­la­ti­on to upgrade the Office and its enforce­ment com­pe­ten­ces and to increa­se its resour­ces. The Com­mis­si­on shall sub­mit this eva­lua­ti­on report to the Euro­pean Par­lia­ment and to the Coun­cil. 3a. By two years [after the date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(2)] and every four years the­re­af­ter, the Com­mis­si­on shall sub­mit a report on the review of the pro­gress on the deve­lo­p­ment of stan­dar­dizati­on deli­ver­a­bles on ener­gy effi­ci­ent deve­lo­p­ment of gene­ral-pur­po­se models and asses the need for fur­ther mea­su­res or actions, inclu­ding bin­ding mea­su­res or actions. The report shall be sub­mit­ted to the Euro­pean Par­lia­ment and to the Coun­cil and it shall be made public. 4. 
Within … [two years after the date of appli­ca­ti­on of this Regu­la­ti­on refer­red to in Artic­le 85(2)] and every three years the­re­af­ter, the Com­mis­si­on shall eva­lua­te the impact and effec­ti­ve­ness of vol­un­t­a­ry codes of con­duct to foster the appli­ca­ti­on of the requi­re­ments set out in Tit­le III, Chap­ter 2 for AI systems other than high-risk AI systems and pos­si­bly other addi­tio­nal requi­re­ments for AI systems other than high-risk AI systems, inclu­ding as regards envi­ron­men­tal sus­taina­bi­li­ty. 5. For the pur­po­se of para­graphs 1 to 4 the Board, the Mem­ber Sta­tes and natio­nal com­pe­tent aut­ho­ri­ties shall pro­vi­de the Com­mis­si­on with infor­ma­ti­on on its request, wit­hout undue delay. 6. In car­ry­ing out the eva­lua­tions and reviews refer­red to in para­graphs 1 to 4 the Com­mis­si­on shall take into account the posi­ti­ons and fin­dings of the Board, of the Euro­pean Par­lia­ment, of the Coun­cil, and of other rele­vant bodies or sources. 7. The Com­mis­si­on shall, if neces­sa­ry, sub­mit appro­pria­te pro­po­sals to amend this Regu­la­ti­on, in par­ti­cu­lar taking into account deve­lo­p­ments in tech­no­lo­gy, the effect of AI systems on health and safe­ty, fun­da­men­tal rights and in the light of the sta­te of pro­gress in the infor­ma­ti­on socie­ty. 7a. To gui­de the eva­lua­tions and reviews refer­red to in para­graphs 1 to 4 of this Artic­le, the Office shall under­ta­ke to deve­lop an objec­ti­ve and par­ti­ci­pa­ti­ve metho­do­lo­gy for the eva­lua­ti­on of risk level based on the cri­te­ria out­lined in the rele­vant artic­les and inclu­si­on of new systems in: the list in Annex III, inclu­ding the exten­si­on of exi­sting area hea­dings or addi­ti­on of new area hea­dings in that Annex; the list of pro­hi­bi­ted prac­ti­ces laid down in Artic­le 5; and the list of AI systems requi­ring addi­tio­nal trans­pa­ren­cy mea­su­res pur­su­ant to Artic­le 52. 7b. Any amend­ment to this Regu­la­ti­on pur­su­ant to para­graph 7 of this Artic­le, or rele­vant future dele­ga­ted or imple­men­ting acts, which con­cern sec­to­ral legis­la­ti­on listed in Annex II Sec­tion B, shall take into account the regu­la­to­ry spe­ci­fi­ci­ties of each sec­tor, and exi­sting gover­nan­ce, con­for­mi­ty assess­ment and enforce­ment mecha­nisms and aut­ho­ri­ties estab­lished the­r­ein. 7c. By … [five years from the date of appli­ca­ti­on of this Regu­la­ti­on], the Com­mis­si­on shall car­ry out an assess­ment of the enforce­ment of this Regu­la­ti­on and shall report it to the Euro­pean Par­lia­ment, the Coun­cil and the Euro­pean Eco­no­mic and Social Com­mit­tee, taking into account the first years of appli­ca­ti­on of the Regu­la­ti­on. On the basis of the fin­dings that report shall, whe­re appro­pria­te, be accom­pa­nied by a pro­po­sal for amend­ment of this Regu­la­ti­on with regard to the struc­tu­re of enforce­ment and the need for an Uni­on agen­cy to resol­ve any iden­ti­fi­ed shortcomings. 

Artic­le 85 – Ent­ry into force and application

1. This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union. 2. This Regulation shall apply from [24 months following the entry into force of the Regulation]. With regard to the obligation referred to in Article 53(1), this obligation shall include either that at least one regulatory sandbox per Member State shall be operational on this day or that the Member State participates in the sandbox of another Member State. 3. By way of derogation from paragraph 2: (-a) Titles I and II [Prohibitions] shall apply from [six months following the entry into force of this Regulation]; (a) Title III Chapter 4, Title VI, Title VIIIa [GPAI], Title X [Penalties] shall apply from [twelve months following the entry into force of this Regulation]; (b) Article 6(1) and the corresponding obligations in this Regulation shall apply from [36 months following the entry into force of this Regulation]. Codes of practice shall be ready at the latest nine months after the entry into force of this Regulation. The AI Office shall take the necessary steps, including inviting providers pursuant to Article 52e paragraph 5. This Regulation shall be binding in its entirety and directly applicable in all Member States. 
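Article 85 staggers application: entry into force on the twentieth day after publication, general application 24 months later, with derogations at six, twelve and 36 months and codes of practice due nine months after entry into force. A minimal sketch of that date arithmetic (the publication date and the month-addition helper are illustrative assumptions, not taken from the Regulation):

from calendar import monthrange
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Shift a date by whole calendar months, clamping the day where needed.
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    return date(year, month, min(d.day, monthrange(year, month)[1]))

publication = date(2024, 7, 12)                              # assumed publication date
entry_into_force = publication + timedelta(days=20)          # Art. 85(1)
general_application = add_months(entry_into_force, 24)       # Art. 85(2)
prohibitions_apply = add_months(entry_into_force, 6)         # Art. 85(3)(-a)
gpai_and_penalties_apply = add_months(entry_into_force, 12)  # Art. 85(3)(a)
article_6_1_applies = add_months(entry_into_force, 36)       # Art. 85(3)(b)
codes_of_practice_due = add_months(entry_into_force, 9)      # last subparagraph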

ANNEX II List of Uni­on har­mo­ni­sa­ti­on legislation

Part I

Sec­tion A. List of Uni­on har­mo­ni­sa­ti­on legis­la­ti­on based on the New Legis­la­ti­ve Framework 1. Direc­ti­ve 2006/42/EC of the Euro­pean Par­lia­ment and of the Coun­cil of 17 May 2006 on machi­nery, and amen­ding Direc­ti­ve 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repea­led by the Machi­nery Regu­la­ti­on]; 2. Direc­ti­ve 2009/48/EC of the Euro­pean Par­lia­ment and of the Coun­cil of 18 June 2009 on the safe­ty of toys (OJ L 170, 30.6.2009, p. 1); 3. Direc­ti­ve 2013/53/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 20 Novem­ber 2013 on recrea­tio­nal craft and per­so­nal water­craft and repe­al­ing Direc­ti­ve 94/25/EC (OJ L 354, 28.12.2013, p. 90); 4. Direc­ti­ve 2014/33/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 26 Febru­ary 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to lifts and safe­ty com­pon­ents for lifts (OJ L 96, 29.3.2014, p. 251); 5. Direc­ti­ve 2014/34/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 26 Febru­ary 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to equip­ment and pro­tec­ti­ve systems inten­ded for use in poten­ti­al­ly explo­si­ve atmo­sphe­res (OJ L 96, 29.3.2014, p. 309); 6. Direc­ti­ve 2014/53/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 16 April 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to the making available on the mar­ket of radio equip­ment and repe­al­ing Direc­ti­ve 1999/5/EC (OJ L 153, 22.5.2014, p. 62); 7. Direc­ti­ve 2014/68/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 15 May 2014 on the har­mo­ni­sa­ti­on of the laws of the Mem­ber Sta­tes rela­ting to the making available on the mar­ket of pres­su­re equip­ment (OJ L 189, 27.6.2014, p. 164); 8. Regu­la­ti­on (EU) 2016/424 of the Euro­pean Par­lia­ment and of the Coun­cil of 9 March 2016 on cable­way instal­la­ti­ons and repe­al­ing Direc­ti­ve 2000/9/EC (OJ L 81, 31.3.2016, p. 1); 9. Regu­la­ti­on (EU) 2016/425 of the Euro­pean Par­lia­ment and of the Coun­cil of 9 March 2016 on per­so­nal pro­tec­ti­ve equip­ment and repe­al­ing Coun­cil Direc­ti­ve 89/686/EEC (OJ L 81, 31.3.2016, p. 51); 10. Regu­la­ti­on (EU) 2016/426 of the Euro­pean Par­lia­ment and of the Coun­cil of 9 March 2016 on appli­ances bur­ning gas­eous fuels and repe­al­ing Direc­ti­ve 2009/142/EC (OJ L 81, 31.3.2016, p. 99); 11. Regu­la­ti­on (EU) 2017/745 of the Euro­pean Par­lia­ment and of the Coun­cil of 5 April 2017 on medi­cal devices, amen­ding Direc­ti­ve 2001/83/EC, Regu­la­ti­on (EC) No 178/2002 and Regu­la­ti­on (EC) No 1223/2009 and repe­al­ing Coun­cil Direc­ti­ves 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1; 12. Regu­la­ti­on (EU) 2017/746 of the Euro­pean Par­lia­ment and of the Coun­cil of 5 April 2017 on in vitro dia­gno­stic medi­cal devices and repe­al­ing Direc­ti­ve 98/79/EC and Com­mis­si­on Decis­i­on 2010/227/EU (OJ L 117, 5.5.2017, p. 176). 

Part II

Sec­tion B. List of other Uni­on har­mo­ni­sa­ti­on legislation 13. Regu­la­ti­on (EC) No 300/2008 of the Euro­pean Par­lia­ment and of the Coun­cil of 11 March 2008 on com­mon rules in the field of civil avia­ti­on secu­ri­ty and repe­al­ing Regu­la­ti­on (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 14. Regu­la­ti­on (EU) No 168/2013 of the Euro­pean Par­lia­ment and of the Coun­cil of 15 Janu­ary 2013 on the appr­oval and mar­ket sur­veil­lan­ce of two- or three-wheel vehic­les and quad­ri­cy­cles (OJ L 60, 2.3.2013, p. 52); 15. Regu­la­ti­on (EU) No 167/2013 of the Euro­pean Par­lia­ment and of the Coun­cil of 5 Febru­ary 2013 on the appr­oval and mar­ket sur­veil­lan­ce of agri­cul­tu­ral and fore­stry vehic­les (OJ L 60, 2.3.2013, p. 1); 16. Direc­ti­ve 2014/90/EU of the Euro­pean Par­lia­ment and of the Coun­cil of 23 July 2014 on mari­ne equip­ment and repe­al­ing Coun­cil Direc­ti­ve 96/98/EC (OJ L 257, 28.8.2014, p. 146); 17. Direc­ti­ve (EU) 2016/797 of the Euro­pean Par­lia­ment and of the Coun­cil of 11 May 2016 on the inter­ope­ra­bi­li­ty of the rail system within the Euro­pean Uni­on (OJ L 138, 26.5.2016, p. 44). 18. Regu­la­ti­on (EU) 2018/858 of the Euro­pean Par­lia­ment and of the Coun­cil of 30 May 2018 on the appr­oval and mar­ket sur­veil­lan­ce of motor vehic­les and their trai­lers, and of systems, com­pon­ents and sepa­ra­te tech­ni­cal units inten­ded for such vehic­les, amen­ding Regu­la­ti­ons (EC) No 715/2007 and (EC) No 595/2009 and repe­al­ing Direc­ti­ve 2007/46/EC (OJ L 151, 14.6.2018, p. 1); 18a. Regu­la­ti­on (EU) 2019/2144 of the Euro­pean Par­lia­ment and of the Coun­cil of 27 Novem­ber 2019 on type-appr­oval requi­re­ments for motor vehic­les and their trai­lers, and systems, com­pon­ents and sepa­ra­te tech­ni­cal units inten­ded for such vehic­les, as regards their gene­ral safe­ty and the pro­tec­tion of vehic­le occu­pants and vul­nerable road users, amen­ding Regu­la­ti­on (EU) 2018/858 of the Euro­pean Par­lia­ment and of the Coun­cil and repe­al­ing Regu­la­ti­ons (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the Euro­pean Par­lia­ment and of the Coun­cil and Com­mis­si­on Regu­la­ti­ons (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1); 19. Regu­la­ti­on (EU) 2018/1139 of the Euro­pean Par­lia­ment and of the Coun­cil of 4 July 2018 on com­mon rules in the field of civil avia­ti­on and estab­li­shing a Euro­pean Uni­on Avia­ti­on Safe­ty Agen­cy, and amen­ding Regu­la­ti­ons (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Direc­ti­ves 2014/30/EU and 2014/53/EU of the Euro­pean Par­lia­ment and of the Coun­cil, and repe­al­ing Regu­la­ti­ons (EC) No 552/2004 and (EC) No 216/2008 of the Euro­pean Par­lia­ment and of the Coun­cil and Coun­cil Regu­la­ti­on (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, pro­duc­tion and pla­cing on the mar­ket of air­crafts refer­red to in points (a) and (b) of Artic­le 2(1) the­reof, whe­re it con­cerns unman­ned air­craft and their engi­nes, pro­pel­lers, parts and equip­ment to con­trol them remo­te­ly, are concerned. 

ANNEX IIa

List of cri­mi­nal offen­ces refer­red to in Artic­le 5 (1)(iii)

– ter­ro­rism; – traf­ficking in human beings; – sexu­al explo­ita­ti­on of child­ren and child por­no­gra­phy; – illi­cit traf­ficking in nar­co­tic drugs and psy­cho­tro­pic sub­stances; – illi­cit traf­ficking in wea­pons, muni­ti­ons and explo­si­ves; – mur­der, grie­vous bodi­ly inju­ry; – illi­cit trade in human organs and tissue; – illi­cit traf­ficking in nuclear or radio­ac­ti­ve mate­ri­als; – kid­nap­ping, ille­gal restraint and hosta­ge-taking; – cri­mes within the juris­dic­tion of the Inter­na­tio­nal Cri­mi­nal Court; – unlawful sei­zu­re of aircraft/ships; – rape; – envi­ron­men­tal crime; – orga­ni­s­ed or armed rob­be­ry; – sabo­ta­ge; – par­ti­ci­pa­ti­on in a cri­mi­nal orga­ni­sa­ti­on invol­ved in one or more offen­ces listed above. 

ANNEX III High-risk AI systems refer­red to in Artic­le 6(2)

High-risk AI systems pur­su­ant to Artic­le 6(2) are the AI systems listed in any of the fol­lo­wing areas:

1. Bio­me­trics, inso­far as their use is per­mit­ted under rele­vant Uni­on or natio­nal law: (a) Remo­te bio­me­tric iden­ti­fi­ca­ti­on systems. This shall not include AI systems inten­ded to be used for bio­me­tric veri­fi­ca­ti­on who­se sole pur­po­se is to con­firm that a spe­ci­fic natu­ral per­son is the per­son he or she claims to be; (aa) AI systems inten­ded to be used for bio­me­tric cate­go­ri­sa­ti­on, accor­ding to sen­si­ti­ve or pro­tec­ted attri­bu­tes or cha­rac­te­ri­stics based on the infe­rence of tho­se attri­bu­tes or cha­rac­te­ri­stics; (ab) AI systems inten­ded to be used for emo­ti­on reco­gni­ti­on. 2. Cri­ti­cal infras­truc­tu­re: (a) AI systems inten­ded to be used as safe­ty com­pon­ents in the manage­ment and ope­ra­ti­on of cri­ti­cal digi­tal infras­truc­tu­re, road traf­fic and the sup­p­ly of water, gas, hea­ting and elec­tri­ci­ty. 3. Edu­ca­ti­on and voca­tio­nal trai­ning: (a) AI systems inten­ded to be used to deter­mi­ne access or admis­si­on or to assign natu­ral per­sons to edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons at all levels; (b) AI systems inten­ded to be used to eva­lua­te lear­ning out­co­mes, inclu­ding when tho­se out­co­mes are used to steer the lear­ning pro­cess of natu­ral per­sons in edu­ca­tio­nal and voca­tio­nal trai­ning insti­tu­ti­ons at all levels; (ba) AI systems inten­ded to be used for the pur­po­se of asses­sing the appro­pria­te level of edu­ca­ti­on that indi­vi­du­als will recei­ve or will be able to access, in the con­text of/within edu­ca­ti­on and voca­tio­nal trai­ning insti­tu­ti­ons; (bb) AI systems inten­ded to be used for moni­to­ring and detec­ting pro­hi­bi­ted beha­viour of stu­dents during tests in the con­text of/within edu­ca­ti­on and voca­tio­nal trai­ning insti­tu­ti­ons. 4. Employment, workers manage­ment and access to self-employment: (a) AI systems inten­ded to be used for recruit­ment or sel­ec­tion of natu­ral per­sons, nota­b­ly to place tar­ge­ted job adver­ti­se­ments, to ana­ly­se and fil­ter job appli­ca­ti­ons, and to eva­lua­te can­di­da­tes; (b) AI systems inten­ded to be used to make decis­i­ons affec­ting terms of the work-rela­ted rela­ti­on­ships, pro­mo­ti­on and ter­mi­na­ti­on of work-rela­ted con­trac­tu­al rela­ti­on­ships, to allo­ca­te tasks based on indi­vi­du­al beha­viour or per­so­nal traits or cha­rac­te­ri­stics and to moni­tor and eva­lua­te per­for­mance and beha­viour of per­sons in such rela­ti­on­ships. 5. 
Access to and enjoy­ment of essen­ti­al pri­va­te ser­vices and essen­ti­al public ser­vices and bene­fits: (a) AI systems inten­ded to be used by public aut­ho­ri­ties or on behalf of public aut­ho­ri­ties to eva­lua­te the eli­gi­bi­li­ty of natu­ral per­sons for essen­ti­al public assi­stance bene­fits and ser­vices, inclu­ding heal­th­ca­re ser­vices, as well as to grant, redu­ce, revo­ke, or recla­im such bene­fits and ser­vices; (b) AI systems inten­ded to be used to eva­lua­te the cre­dit­wort­hi­ness of natu­ral per­sons or estab­lish their cre­dit score, with the excep­ti­on of AI systems used for the pur­po­se of detec­ting finan­cial fraud; (c) AI systems inten­ded to eva­lua­te and clas­si­fy emer­gen­cy calls by natu­ral per­sons or to be used to dis­patch, or to estab­lish prio­ri­ty in the dis­patching of emer­gen­cy first respon­se ser­vices, inclu­ding by poli­ce, fire­figh­ters and medi­cal aid, as well as of emer­gen­cy heal­th­ca­re pati­ent tria­ge systems; (ca) AI systems inten­ded to be used for risk assess­ment and pri­cing in rela­ti­on to natu­ral per­sons in the case of life and health insu­rance. 6. Law enforce­ment, inso­far as their use is per­mit­ted under rele­vant Uni­on or natio­nal law: (a) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties, or by Uni­on insti­tu­ti­ons, agen­ci­es, offices or bodies in sup­port of law enforce­ment aut­ho­ri­ties or on their behalf to assess the risk of a natu­ral per­son to beco­me a vic­tim of cri­mi­nal offen­ces; (b) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, bodies and agen­ci­es in sup­port of law enforce­ment aut­ho­ri­ties as poly­graphs and simi­lar tools; (d) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties, or by Uni­on insti­tu­ti­ons, agen­ci­es, offices or bodies in sup­port of law enforce­ment aut­ho­ri­ties to eva­lua­te the relia­bi­li­ty of evi­dence in the cour­se of inve­sti­ga­ti­on or pro­se­cu­ti­on of cri­mi­nal offen­ces; (e) AI systems inten­ded to be used by law enforce­ment aut­ho­ri­ties or on their behalf or by Uni­on insti­tu­ti­ons, agen­ci­es, offices or bodies in sup­port of law enforce­ment aut­ho­ri­ties for asses­sing the risk of a natu­ral per­son of offen­ding or re-offen­ding not sole­ly based on pro­fil­ing of natu­ral per­sons as refer­red to in Artic­le 3(4) of Direc­ti­ve (EU) 2016/680 or to assess per­so­na­li­ty traits and cha­rac­te­ri­stics or past cri­mi­nal beha­viour of natu­ral per­sons or groups; (f) AI systems inten­ded to be used by or on behalf of law enforce­ment aut­ho­ri­ties or by Uni­on insti­tu­ti­ons, agen­ci­es, offices or bodies in sup­port of law enforce­ment aut­ho­ri­ties for pro­fil­ing of natu­ral per­sons as refer­red to in Artic­le 3(4) of Direc­ti­ve (EU) 2016/680 in the cour­se of detec­tion, inve­sti­ga­ti­on or pro­se­cu­ti­on of cri­mi­nal offen­ces. 7. 
Migra­ti­on, asyl­um and bor­der con­trol manage­ment, inso­far as their use is per­mit­ted under rele­vant Uni­on or natio­nal law: (a) AI systems inten­ded to be used by com­pe­tent public aut­ho­ri­ties as poly­graphs and simi­lar tools; (b) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties or by Uni­on agen­ci­es, offices or bodies to assess a risk, inclu­ding a secu­ri­ty risk, a risk of irre­gu­lar migra­ti­on, or a health risk, posed by a natu­ral per­son who intends to enter or has ente­red into the ter­ri­to­ry of a Mem­ber Sta­te; (d) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties or by Uni­on agen­ci­es, offices or bodies to assist com­pe­tent public aut­ho­ri­ties for the exami­na­ti­on of appli­ca­ti­ons for asyl­um, visa and resi­dence per­mits and asso­cia­ted com­plaints with regard to the eli­gi­bi­li­ty of the natu­ral per­sons app­ly­ing for a sta­tus, inclu­ding rela­ted assess­ment of the relia­bi­li­ty of evi­dence; (da) AI systems inten­ded to be used by or on behalf of com­pe­tent public aut­ho­ri­ties, inclu­ding Uni­on agen­ci­es, offices or bodies, in the con­text of migra­ti­on, asyl­um and bor­der con­trol manage­ment, for the pur­po­se of detec­ting, reco­g­nis­ing or iden­ti­fy­ing natu­ral per­sons with the excep­ti­on of veri­fi­ca­ti­on of tra­vel docu­ments. 8. Admi­ni­stra­ti­on of justi­ce and demo­cra­tic pro­ce­s­ses: (a) AI systems inten­ded to be used by a judi­cial aut­ho­ri­ty or on their behalf to assist a judi­cial aut­ho­ri­ty in rese­ar­ching and inter­pre­ting facts and the law and in app­ly­ing the law to a con­cre­te set of facts or used in a simi­lar way in alter­na­ti­ve dis­pu­te reso­lu­ti­on; (aa) AI systems inten­ded to be used for influen­cing the out­co­me of an elec­tion or refe­ren­dum or the voting beha­viour of natu­ral per­sons in the exer­cise of their vote in elec­tions or refe­ren­da. This does not include AI systems who­se out­put natu­ral per­sons are not direct­ly expo­sed to, such as tools used to orga­ni­se, opti­mi­se and struc­tu­re poli­ti­cal cam­paigns from an admi­ni­stra­ti­ve and logi­stic point of view. 

ANNEX IV Tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11(1)

The tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 11(1) shall con­tain at least the fol­lo­wing infor­ma­ti­on, as appli­ca­ble to the rele­vant AI system:

1. A gene­ral descrip­ti­on of the AI system inclu­ding: (a) its inten­ded pur­po­se, the name of the pro­vi­der and the ver­si­on of the system reflec­ting its rela­ti­on to pre­vious ver­si­ons; (b) how the AI system inter­acts or can be used to inter­act with hard­ware or soft­ware, inclu­ding other AI systems, that are not part of the AI system its­elf, whe­re appli­ca­ble; (c) the ver­si­ons of rele­vant soft­ware or firm­ware and any requi­re­ment rela­ted to ver­si­on update; (d) the descrip­ti­on of all forms in which the AI system is pla­ced on the mar­ket or put into ser­vice (e.g. soft­ware packa­ge embedded into hard­ware, down­loa­da­ble, API etc.); (e) the descrip­ti­on of hard­ware on which the AI system is inten­ded to run; (f) whe­re the AI system is a com­po­nent of pro­ducts, pho­to­graphs or illu­stra­ti­ons show­ing exter­nal fea­tures, mar­king and inter­nal lay­out of tho­se pro­ducts; (fa) a basic descrip­ti­on of the user-inter­face pro­vi­ded to the deployer; (g) ins­truc­tions of use for the deployer and a basic descrip­ti­on of the user-inter­face pro­vi­ded to the deployer whe­re appli­ca­ble. 2. A detail­ed descrip­ti­on of the ele­ments of the AI system and of the pro­cess for its deve­lo­p­ment, inclu­ding: (a) the methods and steps per­for­med for the deve­lo­p­ment of the AI system, inclu­ding, whe­re rele­vant, recour­se to pre-trai­ned systems or tools pro­vi­ded by third par­ties and how the­se have been used, inte­gra­ted or modi­fi­ed by the pro­vi­der; (b) the design spe­ci­fi­ca­ti­ons of the system, name­ly the gene­ral logic of the AI system and of the algo­rith­ms; the key design choices inclu­ding the ratio­na­le and assump­ti­ons made, also with regard to per­sons or groups of per­sons on which the system is inten­ded to be used; the main clas­si­fi­ca­ti­on choices; what the system is desi­gned to opti­mi­se for and the rele­van­ce of the dif­fe­rent para­me­ters; the descrip­ti­on of the expec­ted out­put and out­put qua­li­ty of the system; the decis­i­ons about any pos­si­ble trade-off made regar­ding the tech­ni­cal solu­ti­ons adopted to com­ply with the requi­re­ments set out in Tit­le III, Chap­ter 2; (c) the descrip­ti­on of the system archi­tec­tu­re explai­ning how soft­ware com­pon­ents build on or feed into each other and inte­gra­te into the over­all pro­ce­s­sing; the com­pu­ta­tio­nal resour­ces used to deve­lop, train, test and vali­da­te the AI system; (d) whe­re rele­vant, the data requi­re­ments in terms of datas­heets describ­ing the trai­ning metho­do­lo­gies and tech­ni­ques and the trai­ning data sets used, inclu­ding a gene­ral descrip­ti­on of the­se data sets, infor­ma­ti­on about their pro­ven­an­ce, scope and main cha­rac­te­ri­stics; how the data was obtai­ned and sel­ec­ted; label­ling pro­ce­du­res (e.g. for super­vi­sed lear­ning), data clea­ning metho­do­lo­gies (e.g. 
out­lier detec­tion); (e) assess­ment of the human over­sight mea­su­res nee­ded in accordance with Artic­le 14, inclu­ding an assess­ment of the tech­ni­cal mea­su­res nee­ded to faci­li­ta­te the inter­pre­ta­ti­on of the out­puts of AI systems by the deployers, in accordance with Artic­le 13(3)(d); (f) whe­re appli­ca­ble, a detail­ed descrip­ti­on of pre-deter­mi­ned chan­ges to the AI system and its per­for­mance, tog­e­ther with all the rele­vant infor­ma­ti­on rela­ted to the tech­ni­cal solu­ti­ons adopted to ensu­re con­ti­nuous com­pli­ance of the AI system with the rele­vant requi­re­ments set out in Tit­le III, Chap­ter 2; (g) the vali­da­ti­on and test­ing pro­ce­du­res used, inclu­ding infor­ma­ti­on about the vali­da­ti­on and test­ing data used and their main cha­rac­te­ri­stics; metrics used to mea­su­re accu­ra­cy, robust­ness and com­pli­ance with other rele­vant requi­re­ments set out in Tit­le III, Chap­ter 2 as well as poten­ti­al­ly dis­cri­mi­na­to­ry impacts; test logs and all test reports dated and signed by the respon­si­ble per­sons, inclu­ding with regard to pre-deter­mi­ned chan­ges as refer­red to under point (f); (ga) cyber­se­cu­ri­ty mea­su­res put in place. 3. Detail­ed infor­ma­ti­on about the moni­to­ring, func­tio­ning and con­trol of the AI system, in par­ti­cu­lar with regard to: its capa­bi­li­ties and limi­ta­ti­ons in per­for­mance, inclu­ding the degrees of accu­ra­cy for spe­ci­fic per­sons or groups of per­sons on which the system is inten­ded to be used and the over­all expec­ted level of accu­ra­cy in rela­ti­on to its inten­ded pur­po­se; the fore­seeable unin­ten­ded out­co­mes and sources of risks to health and safe­ty, fun­da­men­tal rights and dis­cri­mi­na­ti­on in view of the inten­ded pur­po­se of the AI system; the human over­sight mea­su­res nee­ded in accordance with Artic­le 14, inclu­ding the tech­ni­cal mea­su­res put in place to faci­li­ta­te the inter­pre­ta­ti­on of the out­puts of AI systems by the deployers; spe­ci­fi­ca­ti­ons on input data, as appro­pria­te; 3. A descrip­ti­on of the appro­pria­ten­ess of the per­for­mance metrics for the spe­ci­fic AI system; 4. A detail­ed descrip­ti­on of the risk manage­ment system in accordance with Artic­le 9; 5. A descrip­ti­on of rele­vant chan­ges made by the pro­vi­der to the system through its life­cy­cle; 6. A list of the har­mo­ni­s­ed stan­dards applied in full or in part the refe­ren­ces of which have been published in the Offi­ci­al Jour­nal of the Euro­pean Uni­on; whe­re no such har­mo­ni­s­ed stan­dards have been applied, a detail­ed descrip­ti­on of the solu­ti­ons adopted to meet the requi­re­ments set out in Tit­le III, Chap­ter 2, inclu­ding a list of other rele­vant stan­dards and tech­ni­cal spe­ci­fi­ca­ti­ons applied; 7. A copy of the EU decla­ra­ti­on of con­for­mi­ty; 8. A detail­ed descrip­ti­on of the system in place to eva­lua­te the AI system per­for­mance in the post-mar­ket pha­se in accordance with Artic­le 61, inclu­ding the post-mar­ket moni­to­ring plan refer­red to in Artic­le 61(3).
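Illustrative note (not part of the Regulation): providers often keep the Annex IV material as structured, machine-readable records so that it can be versioned alongside the system itself. A minimal sketch of one such structure; the field names are assumptions mapped loosely onto points 1 to 8 above, not terms prescribed by the Act:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GeneralDescription:                                  # Annex IV, point 1
    intended_purpose: str
    provider_name: str
    system_version: str
    interaction_with_other_systems: Optional[str] = None   # point 1(b)
    software_firmware_versions: Optional[str] = None       # point 1(c)
    forms_placed_on_market: Optional[str] = None           # point 1(d), e.g. "API"
    target_hardware: Optional[str] = None                  # point 1(e)
    user_interface_description: Optional[str] = None       # points 1(fa) and (g)

@dataclass
class DevelopmentRecord:                                   # Annex IV, point 2 (selected elements)
    methods_and_steps: str                                 # 2(a)
    design_specifications: str                             # 2(b)
    system_architecture: str                               # 2(c)
    data_requirements: Optional[str] = None                # 2(d)
    human_oversight_assessment: Optional[str] = None       # 2(e)
    predetermined_changes: Optional[str] = None            # 2(f)
    validation_and_testing: Optional[str] = None           # 2(g)
    cybersecurity_measures: Optional[str] = None           # 2(ga)

@dataclass
class TechnicalDocumentation:                              # Annex IV as a whole
    general: GeneralDescription
    development: DevelopmentRecord
    monitoring_and_control: str                            # point 3
    performance_metrics_rationale: str                     # second point 3
    risk_management_summary: str                           # point 4, Article 9
    lifecycle_changes: list[str] = field(default_factory=list)     # point 5
    harmonised_standards: list[str] = field(default_factory=list)  # point 6
    eu_declaration_of_conformity: Optional[str] = None             # point 7
    post_market_monitoring_plan: Optional[str] = None              # point 8, Article 61(3)
```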

ANNEX V EU decla­ra­ti­on of conformity

The EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 48, shall con­tain all of the fol­lo­wing information:

1. AI system name and type and any addi­tio­nal unam­bi­guous refe­rence allo­wing iden­ti­fi­ca­ti­on and tracea­bi­li­ty of the AI system; 2. Name and address of the pro­vi­der or, whe­re appli­ca­ble, their aut­ho­ri­sed repre­sen­ta­ti­ve; 3. A state­ment that the EU decla­ra­ti­on of con­for­mi­ty is issued under the sole respon­si­bi­li­ty of the pro­vi­der; 4. A state­ment that the AI system in que­sti­on is in con­for­mi­ty with this Regu­la­ti­on and, if appli­ca­ble, with any other rele­vant Uni­on legis­la­ti­on that pro­vi­des for the issuing of an EU decla­ra­ti­on of con­for­mi­ty; 4a. Whe­re an AI system invol­ves the pro­ce­s­sing of per­so­nal data, a state­ment that that AI system com­plies with Regu­la­ti­ons (EU) 2016/679 and (EU) 2018/1725 and Direc­ti­ve (EU) 2016/680; 5. Refe­ren­ces to any rele­vant har­mo­ni­s­ed stan­dards used or any other com­mon spe­ci­fi­ca­ti­on in rela­ti­on to which con­for­mi­ty is declared; 6. Whe­re appli­ca­ble, the name and iden­ti­fi­ca­ti­on num­ber of the noti­fi­ed body, a descrip­ti­on of the con­for­mi­ty assess­ment pro­ce­du­re per­for­med and iden­ti­fi­ca­ti­on of the cer­ti­fi­ca­te issued; 7. Place and date of issue of the decla­ra­ti­on, name and func­tion of the per­son who signed it as well as an indi­ca­ti­on for, and on behalf of whom, that per­son signed, signa­tu­re.

ANNEX VI Con­for­mi­ty assess­ment pro­ce­du­re based on inter­nal control

1. The con­for­mi­ty assess­ment pro­ce­du­re based on inter­nal con­trol is the con­for­mi­ty assess­ment pro­ce­du­re based on points 2 to 4. 2. The pro­vi­der veri­fi­es that the estab­lished qua­li­ty manage­ment system is in com­pli­ance with the requi­re­ments of Artic­le 17. 3. The pro­vi­der exami­nes the infor­ma­ti­on con­tai­ned in the tech­ni­cal docu­men­ta­ti­on in order to assess the com­pli­ance of the AI system with the rele­vant essen­ti­al requi­re­ments set out in Tit­le III, Chap­ter 2. 4. The pro­vi­der also veri­fi­es that the design and deve­lo­p­ment pro­cess of the AI system and its post-mar­ket moni­to­ring as refer­red to in Artic­le 61 is con­si­stent with the tech­ni­cal documentation. 

ANNEX VII Con­for­mi­ty based on assess­ment of qua­li­ty manage­ment system and assess­ment of tech­ni­cal documentation

1. Introduction

Con­for­mi­ty based on assess­ment of qua­li­ty manage­ment system and assess­ment of the tech­ni­cal docu­men­ta­ti­on is the con­for­mi­ty assess­ment pro­ce­du­re based on points 2 to 5.

2. Overview

The appro­ved qua­li­ty manage­ment system for the design, deve­lo­p­ment and test­ing of AI systems pur­su­ant to Artic­le 17 shall be exami­ned in accordance with point 3 and shall be sub­ject to sur­veil­lan­ce as spe­ci­fi­ed in point 5. The tech­ni­cal docu­men­ta­ti­on of the AI system shall be exami­ned in accordance with point 4.

3. Qua­li­ty manage­ment system

3. 1. The appli­ca­ti­on of the pro­vi­der shall include: (a) the name and address of the pro­vi­der and, if the appli­ca­ti­on is lodged by the aut­ho­ri­sed repre­sen­ta­ti­ve, their name and address as well; (b) the list of AI systems cover­ed under the same qua­li­ty manage­ment system; (c) the tech­ni­cal docu­men­ta­ti­on for each AI system cover­ed under the same qua­li­ty manage­ment system; (d) the docu­men­ta­ti­on con­cer­ning the qua­li­ty manage­ment system which shall cover all the aspects listed under Artic­le 17; (e) a descrip­ti­on of the pro­ce­du­res in place to ensu­re that the qua­li­ty manage­ment system remains ade­qua­te and effec­ti­ve; (f) a writ­ten decla­ra­ti­on that the same appli­ca­ti­on has not been lodged with any other noti­fi­ed body. 3. 2. The qua­li­ty manage­ment system shall be asses­sed by the noti­fi­ed body, which shall deter­mi­ne whe­ther it satis­fies the requi­re­ments refer­red to in Artic­le 17. The decis­i­on shall be noti­fi­ed to the pro­vi­der or its aut­ho­ri­sed repre­sen­ta­ti­ve. The noti­fi­ca­ti­on shall con­tain the con­clu­si­ons of the assess­ment of the qua­li­ty manage­ment system and the rea­so­ned assess­ment decis­i­on. 3. 3. The qua­li­ty manage­ment system as appro­ved shall con­ti­n­ue to be imple­men­ted and main­tai­ned by the pro­vi­der so that it remains ade­qua­te and effi­ci­ent. 3. 4. Any inten­ded chan­ge to the appro­ved qua­li­ty manage­ment system or the list of AI systems cover­ed by the lat­ter shall be brought to the atten­ti­on of the noti­fi­ed body by the pro­vi­der. The pro­po­sed chan­ges shall be exami­ned by the noti­fi­ed body, which shall deci­de whe­ther the modi­fi­ed qua­li­ty manage­ment system con­ti­nues to satis­fy the requi­re­ments refer­red to in point 3.2 or whe­ther a reas­sess­ment is neces­sa­ry. The noti­fi­ed body shall noti­fy the pro­vi­der of its decis­i­on. The noti­fi­ca­ti­on shall con­tain the con­clu­si­ons of the exami­na­ti­on of the chan­ges and the rea­so­ned assess­ment decis­i­on.

4. Con­trol of the tech­ni­cal documentation

4. 1. In addi­ti­on to the appli­ca­ti­on refer­red to in point 3, an appli­ca­ti­on with a noti­fi­ed body of their choice shall be lodged by the pro­vi­der for the assess­ment of the tech­ni­cal docu­men­ta­ti­on rela­ting to the AI system which the pro­vi­der intends to place on the mar­ket or put into ser­vice and which is cover­ed by the qua­li­ty manage­ment system refer­red to under point 3. 4. 2. The appli­ca­ti­on shall include: (a) the name and address of the pro­vi­der; (b) a writ­ten decla­ra­ti­on that the same appli­ca­ti­on has not been lodged with any other noti­fi­ed body; (c) the tech­ni­cal docu­men­ta­ti­on refer­red to in Annex IV. 4. 3. The tech­ni­cal docu­men­ta­ti­on shall be exami­ned by the noti­fi­ed body. Whe­re rele­vant and limi­t­ed to what is neces­sa­ry to ful­fil their tasks, the noti­fi­ed body shall be gran­ted full access to the trai­ning, vali­da­ti­on, and test­ing data­sets used, inclu­ding, whe­re appro­pria­te and sub­ject to secu­ri­ty safe­guards, through appli­ca­ti­on pro­gramming inter­faces (API) or other rele­vant tech­ni­cal means and tools enab­ling remo­te access. 4. 4. In exami­ning the tech­ni­cal docu­men­ta­ti­on, the noti­fi­ed body may requi­re that the pro­vi­der sup­plies fur­ther evi­dence or car­ri­es out fur­ther tests so as to enable a pro­per assess­ment of con­for­mi­ty of the AI system with the requi­re­ments set out in Tit­le III, Chap­ter 2. When­ever the noti­fi­ed body is not satis­fied with the tests car­ri­ed out by the pro­vi­der, the noti­fi­ed body shall direct­ly car­ry out ade­qua­te tests, as appro­pria­te. 4. 5. Whe­re neces­sa­ry to assess the con­for­mi­ty of the high-risk AI system with the requi­re­ments set out in Tit­le III, Chap­ter 2, after all other rea­sonable ways to veri­fy con­for­mi­ty have been exhau­sted and have pro­ven to be insuf­fi­ci­ent, and upon a rea­so­ned request, the noti­fi­ed body shall also be gran­ted access to the trai­ning and trai­ned models of the AI system, inclu­ding its rele­vant para­me­ters. Such access shall be sub­ject to exi­sting Uni­on law on the pro­tec­tion of intellec­tu­al pro­per­ty and trade secrets. 4. 6. The decis­i­on shall be noti­fi­ed to the pro­vi­der or its aut­ho­ri­sed repre­sen­ta­ti­ve. The noti­fi­ca­ti­on shall con­tain the con­clu­si­ons of the assess­ment of the tech­ni­cal docu­men­ta­ti­on and the rea­so­ned assess­ment decis­i­on. Whe­re the AI system is in con­for­mi­ty with the requi­re­ments set out in Tit­le III, Chap­ter 2, an EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te shall be issued by the noti­fi­ed body. The cer­ti­fi­ca­te shall indi­ca­te the name and address of the pro­vi­der, the con­clu­si­ons of the exami­na­ti­on, the con­di­ti­ons (if any) for its vali­di­ty and the data neces­sa­ry for the iden­ti­fi­ca­ti­on of the AI system. The cer­ti­fi­ca­te and its anne­xes shall con­tain all rele­vant infor­ma­ti­on to allow the con­for­mi­ty of the AI system to be eva­lua­ted, and to allow for con­trol of the AI system while in use, whe­re appli­ca­ble. Whe­re the AI system is not in con­for­mi­ty with the requi­re­ments set out in Tit­le III, Chap­ter 2, the noti­fi­ed body shall refu­se to issue an EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te and shall inform the appli­cant accor­din­gly, giving detail­ed rea­sons for its refu­sal. 
Whe­re the AI system does not meet the requi­re­ment rela­ting to the data used to train it, re-trai­ning of the AI system will be nee­ded pri­or to the appli­ca­ti­on for a new con­for­mi­ty assess­ment. In this case, the rea­so­ned assess­ment decis­i­on of the noti­fi­ed body refu­sing to issue the EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te shall con­tain spe­ci­fic con­side­ra­ti­ons on the qua­li­ty data used to train the AI system, nota­b­ly on the rea­sons for non-com­pli­ance. 4. 7. Any chan­ge to the AI system that could affect the com­pli­ance of the AI system with the requi­re­ments or its inten­ded pur­po­se shall be appro­ved by the noti­fi­ed body which issued the EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te. The pro­vi­der shall inform such noti­fi­ed body of its inten­ti­on to intro­du­ce any of the abo­ve-men­tio­ned chan­ges or if it beco­mes other­wi­se awa­re of the occur­rence of such chan­ges. The inten­ded chan­ges shall be asses­sed by the noti­fi­ed body which shall deci­de whe­ther tho­se chan­ges requi­re a new con­for­mi­ty assess­ment in accordance with Artic­le 43(4) or whe­ther they could be addres­sed by means of a sup­ple­ment to the EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te. In the lat­ter case, the noti­fi­ed body shall assess the chan­ges, noti­fy the pro­vi­der of its decis­i­on and, whe­re the chan­ges are appro­ved, issue to the pro­vi­der a sup­ple­ment to the EU tech­ni­cal docu­men­ta­ti­on assess­ment certificate. 

5. Sur­veil­lan­ce of the appro­ved qua­li­ty manage­ment system

5. 1. The pur­po­se of the sur­veil­lan­ce car­ri­ed out by the noti­fi­ed body refer­red to in Point 3 is to make sure that the pro­vi­der duly ful­fils the terms and con­di­ti­ons of the appro­ved qua­li­ty manage­ment system. 5. 2. For assess­ment pur­po­ses, the pro­vi­der shall allow the noti­fi­ed body to access the pre­mi­ses whe­re the design, deve­lo­p­ment, test­ing of the AI systems is taking place. The pro­vi­der shall fur­ther share with the noti­fi­ed body all neces­sa­ry infor­ma­ti­on. 5. 3. The noti­fi­ed body shall car­ry out peri­odic audits to make sure that the pro­vi­der main­ta­ins and applies the qua­li­ty manage­ment system and shall pro­vi­de the pro­vi­der with an audit report. In the con­text of tho­se audits, the noti­fi­ed body may car­ry out addi­tio­nal tests of the AI systems for which an EU tech­ni­cal docu­men­ta­ti­on assess­ment cer­ti­fi­ca­te was issued. 

ANNEX VIII Infor­ma­ti­on to be sub­mit­ted upon the regi­stra­ti­on of high-risk AI systems in accordance with Artic­le 51

SECTION A – Infor­ma­ti­on to be sub­mit­ted by pro­vi­ders of high-risk AI systems in accordance with Artic­le 51(1)

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to high-risk AI systems to be regi­stered in accordance with Artic­le 51(1): 1. Name, address and cont­act details of the pro­vi­der; 2. Whe­re sub­mis­si­on of infor­ma­ti­on is car­ri­ed out by ano­ther per­son on behalf of the pro­vi­der, the name, address and cont­act details of that per­son; 3. Name, address and cont­act details of the aut­ho­ri­sed repre­sen­ta­ti­ve, whe­re appli­ca­ble; 4. AI system trade name and any addi­tio­nal unam­bi­guous refe­rence allo­wing iden­ti­fi­ca­ti­on and tracea­bi­li­ty of the AI system; 5. Descrip­ti­on of the inten­ded pur­po­se of the AI system and of the com­pon­ents and func­tions sup­port­ed through this AI system; 5a. A basic and con­cise descrip­ti­on of the infor­ma­ti­on used by the system (data, inputs) and its ope­ra­ting logic; 6. Sta­tus of the AI system (on the mar­ket, or in ser­vice; no lon­ger pla­ced on the market/in ser­vice, recal­led); 7. Type, num­ber and expiry date of the cer­ti­fi­ca­te issued by the noti­fi­ed body and the name or iden­ti­fi­ca­ti­on num­ber of that noti­fi­ed body, when appli­ca­ble; 8. A scan­ned copy of the cer­ti­fi­ca­te refer­red to in point 7, when appli­ca­ble; 9. Mem­ber Sta­tes in which the AI system is or has been pla­ced on the mar­ket, put into ser­vice or made available in the Uni­on; 10. A copy of the EU decla­ra­ti­on of con­for­mi­ty refer­red to in Artic­le 48; 11. Elec­tro­nic ins­truc­tions for use; this infor­ma­ti­on shall not be pro­vi­ded for high-risk AI systems in the are­as of law enforce­ment and migra­ti­on, asyl­um and bor­der con­trol manage­ment refer­red to in Annex III, points 1, 6 and 7. 12. URL for addi­tio­nal infor­ma­ti­on (optio­nal).
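Illustrative note (not part of the Regulation): before submitting a Section A entry to the EU database, a provider will typically check that the mandatory items are present. A minimal sketch of such a completeness check; the field names and the split into required and conditional items are assumptions, not prescribed by Article 51 or this Annex:

```python
# Points of Section A mapped to assumed field names (illustrative only).
REQUIRED_FIELDS = [
    "provider_name", "provider_address", "provider_contact",   # point 1
    "trade_name",                                               # point 4
    "intended_purpose",                                         # point 5
    "data_and_operating_logic_summary",                         # point 5a
    "status",                                                   # point 6
    "member_states",                                            # point 9
    "eu_declaration_of_conformity",                             # point 10
]
CONDITIONAL_FIELDS = [
    "submitter_details",          # point 2, if submitted on the provider's behalf
    "authorised_representative",  # point 3
    "notified_body_certificate",  # points 7 and 8
    "instructions_for_use",       # point 11, unless exempt (Annex III points 1, 6 and 7)
    "additional_info_url",        # point 12, optional
]

def missing_required(entry: dict) -> list[str]:
    """Return the required Section A items that are absent or empty in a draft entry."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]

draft_entry = {"provider_name": "Example Provider", "status": "on the market"}  # hypothetical
print(missing_required(draft_entry))
```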

SECTION B – Infor­ma­ti­on to be sub­mit­ted by deployers of high-risk AI systems in accordance with Artic­le 51(1b)

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to high-risk AI systems to be regi­stered in accordance with Artic­le 51: 1. The name, address and cont­act details of the deployer; 2. The name, address and cont­act details of the per­son sub­mit­ting infor­ma­ti­on on behalf of the deployer; 5. A sum­ma­ry of the fin­dings of the fun­da­men­tal rights impact assess­ment con­duc­ted in accordance with Artic­le 29a; 6. The URL of the ent­ry of the AI system in the EU data­ba­se by its pro­vi­der; 7. A sum­ma­ry of the data pro­tec­tion impact assess­ment car­ri­ed out in accordance with Artic­le 35 of Regu­la­ti­on (EU) 2016/679 or Artic­le 27 of Direc­ti­ve (EU) 2016/680 as spe­ci­fi­ed in para­graph 6 of Artic­le 29 of this Regu­la­ti­on, whe­re applicable. 

SECTION C – Infor­ma­ti­on to be sub­mit­ted by pro­vi­ders of high-risk AI systems in accordance with Artic­le 51(1a)

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to AI systems to be regi­stered in accordance with Artic­le 51(1a). 1. Name, address and cont­act details of the pro­vi­der; 1. Whe­re sub­mis­si­on of infor­ma­ti­on is car­ri­ed out by ano­ther per­son on behalf of the pro­vi­der, the name, address and cont­act details of that per­son; 2. Name, address and cont­act details of the aut­ho­ri­sed repre­sen­ta­ti­ve, whe­re appli­ca­ble; 3. AI system trade name and any addi­tio­nal unam­bi­guous refe­rence allo­wing iden­ti­fi­ca­ti­on and tracea­bi­li­ty of the AI system; 4. Descrip­ti­on of the inten­ded pur­po­se of the AI system; 5. Based on which cri­ter­ion or cri­te­ria pro­vi­ded in Artic­le 6(2a) the AI system is con­side­red as not high-risk; 6. Short sum­ma­ry of the grounds for con­side­ring the AI system as not high-risk in appli­ca­ti­on of the pro­ce­du­re under Artic­le 6(2a); 7. Sta­tus of the AI system (on the mar­ket, or in ser­vice; no lon­ger pla­ced on the market/in ser­vice, recal­led); Mem­ber Sta­tes in which the AI system is or has been pla­ced on the mar­ket, put into ser­vice or made available in the Union. 

ANNEX VII­Ia Infor­ma­ti­on to be sub­mit­ted upon the regi­stra­ti­on of high-risk AI systems listed in Annex III in rela­ti­on to test­ing in real world con­di­ti­ons in accordance with Artic­le 54a

The fol­lo­wing infor­ma­ti­on shall be pro­vi­ded and the­re­af­ter kept up to date with regard to test­ing in real world con­di­ti­ons to be regi­stered in accordance with Artic­le 54a:

1. Uni­on-wide uni­que sin­gle iden­ti­fi­ca­ti­on num­ber of the test­ing in real world con­di­ti­ons; 2. Name and cont­act details of the pro­vi­der or pro­s­pec­ti­ve pro­vi­der and users invol­ved in the test­ing in real world con­di­ti­ons; 3. A brief descrip­ti­on of the AI system, its inten­ded pur­po­se and other infor­ma­ti­on neces­sa­ry for the iden­ti­fi­ca­ti­on of the system; 4. A sum­ma­ry of the main cha­rac­te­ri­stics of the plan for test­ing in real world con­di­ti­ons; 5. Infor­ma­ti­on on the sus­pen­si­on or ter­mi­na­ti­on of the test­ing in real world conditions. 

ANNEX IX Uni­on legis­la­ti­on on lar­ge-sca­le IT systems in the area of Free­dom, Secu­ri­ty and Justice

1. Schen­gen Infor­ma­ti­on System

(a) Regu­la­ti­on (EU) 2018/1860 of the Euro­pean Par­lia­ment and of the Coun­cil of 28 Novem­ber 2018 on the use of the Schen­gen Infor­ma­ti­on System for the return of ille­gal­ly stay­ing third-coun­try natio­nals (OJ L 312, 7.12.2018, p. 1). (b) Regu­la­ti­on (EU) 2018/1861 of the Euro­pean Par­lia­ment and of the Coun­cil of 28 Novem­ber 2018 on the estab­lish­ment, ope­ra­ti­on and use of the Schen­gen Infor­ma­ti­on System (SIS) in the field of bor­der checks, and amen­ding the Con­ven­ti­on imple­men­ting the Schen­gen Agree­ment, and amen­ding and repe­al­ing Regu­la­ti­on (EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14). (c) Regu­la­ti­on (EU) 2018/1862 of the Euro­pean Par­lia­ment and of the Coun­cil of 28 Novem­ber 2018 on the estab­lish­ment, ope­ra­ti­on and use of the Schen­gen Infor­ma­ti­on System (SIS) in the field of poli­ce coope­ra­ti­on and judi­cial coope­ra­ti­on in cri­mi­nal mat­ters, amen­ding and repe­al­ing Coun­cil Decis­i­on 2007/533/JHA, and repe­al­ing Regu­la­ti­on (EC) No 1986/2006 of the Euro­pean Par­lia­ment and of the Coun­cil and Com­mis­si­on Decis­i­on 2010/261/EU (OJ L 312, 7.12.2018, p. 56). 

2. Visa Infor­ma­ti­on System

(a) Pro­po­sal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amen­ding Regu­la­ti­on (EC) No 767/2008, Regu­la­ti­on (EC) No 810/2009, Regu­la­ti­on (EU) 2017/2226, Regu­la­ti­on (EU) 2016/399, Regu­la­ti­on XX/2018 [Inter­ope­ra­bi­li­ty Regu­la­ti­on], and Decis­i­on 2004/512/EC and repe­al­ing Coun­cil Decis­i­on 2008/633/JHA – COM(2018) 302 final. To be updated once the Regu­la­ti­on is adopted (April/May 2021) by the co-legislators. 

3. Eurodac

(a) Amen­ded pro­po­sal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the estab­lish­ment of ‘Euro­dac’ for the com­pa­ri­son of bio­me­tric data for the effec­ti­ve appli­ca­ti­on of Regu­la­ti­on (EU) XXX/XXX [Regu­la­ti­on on Asyl­um and Migra­ti­on Manage­ment] and of Regu­la­ti­on (EU) XXX/XXX [Resett­le­ment Regu­la­ti­on], for iden­ti­fy­ing an ille­gal­ly stay­ing third-coun­try natio­nal or sta­te­l­ess per­son and on requests for the com­pa­ri­son with Euro­dac data by Mem­ber Sta­tes’ law enforce­ment aut­ho­ri­ties and Euro­pol for law enforce­ment pur­po­ses and amen­ding Regu­la­ti­ons (EU) 2018/1240 and (EU) 2019/818 – COM(2020) 614 final. 

4. Entry/Exit System

(a) Regu­la­ti­on (EU) 2017/2226 of the Euro­pean Par­lia­ment and of the Coun­cil of 30 Novem­ber 2017 estab­li­shing an Entry/Exit System (EES) to regi­ster ent­ry and exit data and refu­sal of ent­ry data of third-coun­try natio­nals crossing the exter­nal bor­ders of the Mem­ber Sta­tes and deter­mi­ning the con­di­ti­ons for access to the EES for law enforce­ment pur­po­ses, and amen­ding the Con­ven­ti­on imple­men­ting the Schen­gen Agree­ment and Regu­la­ti­ons (EC) No 767/2008 and (EU) No 1077/2011 (OJ L 327, 9.12.2017, p. 20). 

5. Euro­pean Tra­vel Infor­ma­ti­on and Aut­ho­ri­sa­ti­on System

(a) Regu­la­ti­on (EU) 2018/1240 of the Euro­pean Par­lia­ment and of the Coun­cil of 12 Sep­tem­ber 2018 estab­li­shing a Euro­pean Tra­vel Infor­ma­ti­on and Aut­ho­ri­sa­ti­on System (ETIAS) and amen­ding Regu­la­ti­ons (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, 19.9.2018, p. 1). (b) Regu­la­ti­on (EU) 2018/1241 of the Euro­pean Par­lia­ment and of the Coun­cil of 12 Sep­tem­ber 2018 amen­ding Regu­la­ti­on (EU) 2016/794 for the pur­po­se of estab­li­shing a Euro­pean Tra­vel Infor­ma­ti­on and Aut­ho­ri­sa­ti­on System (ETIAS) (OJ L 236, 19.9.2018, p. 72). 

6. Euro­pean Cri­mi­nal Records Infor­ma­ti­on System on third-coun­try natio­nals and sta­te­l­ess persons

(a) Regu­la­ti­on (EU) 2019/816 of the Euro­pean Par­lia­ment and of the Coun­cil of 17 April 2019 estab­li­shing a cen­tra­li­sed system for the iden­ti­fi­ca­ti­on of Mem­ber Sta­tes hol­ding con­vic­tion infor­ma­ti­on on third-coun­try natio­nals and sta­te­l­ess per­sons (ECRIS-TCN) to sup­ple­ment the Euro­pean Cri­mi­nal Records Infor­ma­ti­on System and amen­ding Regu­la­ti­on (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1). 

7. Interoperability

(a) Regu­la­ti­on (EU) 2019/817 of the Euro­pean Par­lia­ment and of the Coun­cil of 20 May 2019 on estab­li­shing a frame­work for inter­ope­ra­bi­li­ty bet­ween EU infor­ma­ti­on systems in the field of bor­ders and visa (OJ L 135, 22.5.2019, p. 27). (b) Regu­la­ti­on (EU) 2019/818 of the Euro­pean Par­lia­ment and of the Coun­cil of 20 May 2019 on estab­li­shing a frame­work for inter­ope­ra­bi­li­ty bet­ween EU infor­ma­ti­on systems in the field of poli­ce and judi­cial coope­ra­ti­on, asyl­um and migra­ti­on (OJ L 135, 22.5.2019, p. 85). 

ANNEX IXa Tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le 52c(1a): tech­ni­cal docu­men­ta­ti­on for pro­vi­ders of gene­ral pur­po­se AI models:

Sec­tion 1: Infor­ma­ti­on to be pro­vi­ded by all pro­vi­ders of gene­ral-pur­po­se AI models

The tech­ni­cal docu­men­ta­ti­on refer­red to in Artic­le X (b) shall con­tain at least the fol­lo­wing infor­ma­ti­on as appro­pria­te to the size and risk pro­fi­le of the model: 1. A gene­ral descrip­ti­on of the gene­ral pur­po­se AI model inclu­ding: (a) the tasks that the model is inten­ded to per­form and the type and natu­re of AI systems in which it can be inte­gra­ted; (b) accep­ta­ble use poli­ci­es appli­ca­ble; (c) the date of release and methods of dis­tri­bu­ti­on; (d) the archi­tec­tu­re and num­ber of para­me­ters; (e) moda­li­ty (e.g. text, image) and for­mat of inputs and out­puts; (f) the licen­se. 2. A detail­ed descrip­ti­on of the ele­ments of the model refer­red to in para­graph 1, and rele­vant infor­ma­ti­on of the pro­cess for the deve­lo­p­ment, inclu­ding the fol­lo­wing ele­ments: (a) the tech­ni­cal means (e.g. ins­truc­tions of use, infras­truc­tu­re, tools) requi­red for the gene­ral-pur­po­se AI model to be inte­gra­ted in AI systems; (b) the design spe­ci­fi­ca­ti­ons of the model and trai­ning pro­cess, inclu­ding trai­ning metho­do­lo­gies and tech­ni­ques, the key design choices inclu­ding the ratio­na­le and assump­ti­ons made; what the model is desi­gned to opti­mi­se for and the rele­van­ce of the dif­fe­rent para­me­ters, as appli­ca­ble; (c) infor­ma­ti­on on the data used for trai­ning, test­ing and vali­da­ti­on, whe­re appli­ca­ble, inclu­ding type and pro­ven­an­ce of data and cura­ti­on metho­do­lo­gies (e.g. clea­ning, fil­te­ring etc.), the num­ber of data points, their scope and main cha­rac­te­ri­stics; how the data was obtai­ned and sel­ec­ted as well as all other mea­su­res to detect the unsui­ta­bi­li­ty of data sources and methods to detect iden­ti­fia­ble bia­ses, whe­re appli­ca­ble; (d) the com­pu­ta­tio­nal resour­ces used to train the model (e.g. num­ber of floa­ting point ope­ra­ti­ons – FLOPs), trai­ning time, and other rele­vant details rela­ted to the trai­ning; (e) known or esti­ma­ted ener­gy con­sump­ti­on of the model; in case not known, this could be based on infor­ma­ti­on about com­pu­ta­tio­nal resour­ces used; 
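Illustrative note (not part of the Regulation): point 2(e) allows the energy consumption to be derived from the computational resources reported under point 2(d) when it has not been metered. One way to sketch such a derivation; every figure below is an assumption for illustration, not a value taken from the Act:

```python
def estimated_training_energy_kwh(total_flops: float,
                                  peak_flops_per_accelerator: float,
                                  utilisation: float,
                                  avg_power_per_accelerator_w: float,
                                  pue: float = 1.2) -> float:
    """Back-of-the-envelope energy estimate from training compute.

    total_flops                  -- training compute reported under point 2(d)
    peak_flops_per_accelerator   -- peak throughput of one accelerator (FLOP/s)
    utilisation                  -- achieved fraction of peak throughput
    avg_power_per_accelerator_w  -- average board power draw (watts)
    pue                          -- data-centre power usage effectiveness overhead
    """
    accelerator_seconds = total_flops / (peak_flops_per_accelerator * utilisation)
    joules = accelerator_seconds * avg_power_per_accelerator_w * pue  # W * s = J
    return joules / 3.6e6                                             # J -> kWh

# Example with assumed figures: 1e24 FLOPs, 1e15 FLOP/s peak, 40 % utilisation, 500 W per card.
print(f"~{estimated_training_energy_kwh(1e24, 1e15, 0.4, 500):,.0f} kWh")
```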

Sec­tion 2: Addi­tio­nal infor­ma­ti­on to be pro­vi­ded by pro­vi­ders of gene­ral pur­po­se AI model with syste­mic risk

3. Detail­ed descrip­ti­on of the eva­lua­ti­on stra­te­gies, inclu­ding eva­lua­ti­on results, on the basis of available public eva­lua­ti­on pro­to­cols and tools or other­wi­se of other eva­lua­ti­on metho­do­lo­gies. Eva­lua­ti­on stra­te­gies shall include eva­lua­ti­on cri­te­ria, metrics and the metho­do­lo­gy on the iden­ti­fi­ca­ti­on of limi­ta­ti­ons. 4. Whe­re appli­ca­ble, detail­ed descrip­ti­on of the mea­su­res put in place for the pur­po­se of con­duc­ting inter­nal and/or exter­nal adver­sa­ri­al test­ing (e.g. red team­ing), model adap­t­ati­ons, inclu­ding ali­gnment and fine-tuning. 5. Whe­re appli­ca­ble, detail­ed descrip­ti­on of the system archi­tec­tu­re explai­ning how soft­ware com­pon­ents build or feed into each other and inte­gra­te into the over­all processing. 

ANNEX IXb Trans­pa­ren­cy infor­ma­ti­on refer­red to in Artic­le 52c(1b): tech­ni­cal docu­men­ta­ti­on for pro­vi­ders of gene­ral pur­po­se AI models to down­stream pro­vi­ders that inte­gra­te the model into their AI system

The infor­ma­ti­on refer­red to in Artic­le 52c shall con­tain at least the following:

1. A gene­ral descrip­ti­on of the gene­ral pur­po­se AI model inclu­ding: (a) the tasks that the model is inten­ded to per­form and the type and natu­re of AI systems in which it can be inte­gra­ted; (b) accep­ta­ble use poli­ci­es appli­ca­ble; (c) the date of release and methods of dis­tri­bu­ti­on; (d) how the model inter­acts or can be used to inter­act with hard­ware or soft­ware that is not part of the model its­elf, whe­re appli­ca­ble; (e) the ver­si­ons of rele­vant soft­ware rela­ted to the use of the gene­ral pur­po­se AI model, whe­re appli­ca­ble; (f) archi­tec­tu­re and num­ber of para­me­ters, (g) moda­li­ty (e.g., text, image) and for­mat of inputs and out­puts; (h) the licen­se for the model. 2. A descrip­ti­on of the ele­ments of the model and of the pro­cess for its deve­lo­p­ment, inclu­ding: (a) the tech­ni­cal means (e.g. ins­truc­tions of use, infras­truc­tu­re, tools) requi­red for the gene­ral-pur­po­se AI model to be inte­gra­ted in AI systems; (b) moda­li­ty (e.g., text, image, etc.) and for­mat of the inputs and out­puts and their maxi­mum size (e.g., con­text win­dow length, etc.); (c) infor­ma­ti­on on the data used for trai­ning, test­ing and vali­da­ti­on, whe­re appli­ca­ble, inclu­ding, type and pro­ven­an­ce of data and cura­ti­on methodologies. 

ANNEX IXc Cri­te­ria for the desi­gna­ti­on of gene­ral pur­po­se AI models with syste­mic risk refer­red to in artic­le 52a

For the pur­po­se of deter­mi­ning that a gene­ral pur­po­se AI model has capa­bi­li­ties or impact equi­va­lent to tho­se of points (a) and (b) in Artic­le 52a, the Com­mis­si­on shall take into account the fol­lo­wing criteria:

(a) num­ber of para­me­ters of the model; (b) qua­li­ty or size of the data set, for exam­p­le mea­su­red through tokens; (c) the amount of com­pu­te used for trai­ning the model, mea­su­red in FLOPs or indi­ca­ted by a com­bi­na­ti­on of other varia­bles such as esti­ma­ted cost of trai­ning, esti­ma­ted time requi­red for the trai­ning, or esti­ma­ted ener­gy con­sump­ti­on for the trai­ning; (d) input and out­put moda­li­ties of the model, such as text to text (lar­ge lan­guage models), text to image, mul­ti-moda­li­ty, and the sta­te-of-the-art thres­holds for deter­mi­ning high-impact capa­bi­li­ties for each moda­li­ty, and the spe­ci­fic type of inputs and out­puts (e.g. bio­lo­gi­cal sequen­ces); (e) bench­marks and eva­lua­tions of capa­bi­li­ties of the model, inclu­ding con­side­ring the num­ber of tasks wit­hout addi­tio­nal trai­ning, adap­ta­bi­li­ty to learn new, distinct tasks, its degree of auto­no­my and sca­la­bi­li­ty, the tools it has access to; (f) it has a high impact on the inter­nal mar­ket due to its reach, which shall be pre­su­med when it has been made available to at least 10 000 regi­stered busi­ness users estab­lished in the Uni­on; (g) num­ber of regi­stered end-users.
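Illustrative note (not part of the Regulation): criterion (c) refers to training compute measured in FLOPs. Where the exact figure has not been logged, a widely used rule of thumb for dense transformer models approximates it as roughly six FLOPs per parameter per training token; the heuristic, the model size and the 1e25 comparison threshold below are all assumptions for illustration, not values taken from this Annex:

```python
def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Common ~6 * N * D approximation for dense transformer training compute."""
    return 6.0 * n_parameters * n_training_tokens

ILLUSTRATIVE_THRESHOLD_FLOPS = 1e25   # illustrative comparison point for criterion (c)

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
compute = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {compute:.2e} FLOPs")
print("above" if compute > ILLUSTRATIVE_THRESHOLD_FLOPS else "below",
      "the illustrative threshold")
```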