AI Regulation: draft; compromise proposal of the Slovenian Council Presidency

Draft regulation

In April 2021, the European Commission published a draft regulation establishing harmonized rules for artificial intelligence (AI Regulation). The draft is currently being discussed in the European Parliament.

Scope

The regulation governs AI systems. On November 29, 2021, the Slovenian presidency adopted a compromise text and, among other things, supplemented the subject matter and scope of application and adjusted the definitions. In particular, the term “AI system” was changed. It is not decisive whether a system functions autonomously or as a component of a product. An AI system is now defined as follows:

‘artificial intelligence system’ (AI system) means a system that

(i) receives machine and/or human-based data and inputs,

(ii) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and

(iii) generates outputs in the form of content (generative AI systems), predictions, recommendations or decisions, which influence the environments it interacts with;

Annex I contains a list of the covered AI techniques and approaches (a brief illustration follows the list):

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.
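To illustrate how broadly this definition reaches, the following minimal Python sketch (a purely hypothetical example, not taken from the draft) shows that even a trivial least-squares routine arguably meets all three limbs: it receives data and inputs (i), infers how to achieve a human-defined objective using a statistical approach of the kind listed in Annex I(c) (ii), and generates a prediction as output (iii).

    # Hypothetical illustration: a trivial statistical routine mapped onto the
    # three limbs of the compromise definition of an "AI system".

    def fit_least_squares(xs, ys):
        """Ordinary least squares for y = a*x + b -- an Annex I(c) 'statistical approach'."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
            (x - mean_x) ** 2 for x in xs
        )
        return slope, mean_y - slope * mean_x

    # (i) machine- and/or human-based data and inputs
    observed_x = [1.0, 2.0, 3.0, 4.0]
    observed_y = [2.1, 3.9, 6.2, 8.1]

    # (ii) inference towards a human-defined objective (minimise the squared error)
    a, b = fit_least_squares(observed_x, observed_y)

    # (iii) output in the form of a prediction that can influence its environment
    print(f"predicted y at x = 5: {a * 5 + b:.2f}")

Whether such borderline cases are actually covered would ultimately depend on how “learning, reasoning or modelling” in limb (ii) is interpreted.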

In territorial terms, the AI Regulation applies not only to users and providers established in the EU, but also to the placing on the market in the EU of AI systems or products using an AI system, and to providers and users of AI systems to the extent that the results generated by those systems are used in the EU. Recitals 10 and 11 provide further details in this regard.

Prohibited practices (blacklist)

The regulation prohibits certain particularly risky or ethically questionable uses of artificial intelligence (“prohibited practices”), e.g. AI systems that

  • exert an influence that operates below the level of conscious awareness,
  • target persons who are particularly vulnerable,
  • can be used to classify the trustworthiness of individuals based on social affiliation or social behavior (social scoring).

To this extent, not only is the use of such AI systems prohibited, but already the placing of corresponding systems on the market. This is another example of the upstream protection found in data law, which can also be observed elsewhere (e.g., in the obligation to conduct impact assessments, in the principle of privacy by design, and in the fiction of a violation of privacy even in the case of negligible violations of processing principles).

High-risk systems

The regulation also provides, under certain conditions, for the classification of AI systems as high-risk systems (according to Annex III of the Regulation), which entails special requirements for the system itself. Systems are classified as high-risk on the one hand due to the technology used and on the other hand due to their use in certain sectors (a simplified sketch follows the list below). For example, AI systems are considered high-risk if they are used in the following areas:

  • biometric identification without consent (one can imagine the importance the validity of consent to biometric identification takes on when the identification is based on AI);
  • safety components in traffic control, energy supply, digital infrastructure or emission control;
  • access testing for examinations;
  • screening of job applications;
  • controlling access to certain services;
  • law enforcement.
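As a simplified sketch (with purely illustrative area names and a hypothetical helper function, not taken from Annex III), the sector-based part of this classification can be thought of as a lookup against the listed areas:

    # Hypothetical sketch of the sector-based high-risk check described above.
    # Area names and the helper function are illustrative only.
    from enum import Enum, auto

    class UseArea(Enum):
        BIOMETRIC_IDENTIFICATION = auto()
        CRITICAL_INFRASTRUCTURE_SAFETY = auto()  # traffic, energy, digital infrastructure
        EDUCATION_AND_EXAMS = auto()
        RECRUITMENT = auto()
        ACCESS_TO_SERVICES = auto()
        LAW_ENFORCEMENT = auto()
        OTHER = auto()

    HIGH_RISK_AREAS = {
        UseArea.BIOMETRIC_IDENTIFICATION,
        UseArea.CRITICAL_INFRASTRUCTURE_SAFETY,
        UseArea.EDUCATION_AND_EXAMS,
        UseArea.RECRUITMENT,
        UseArea.ACCESS_TO_SERVICES,
        UseArea.LAW_ENFORCEMENT,
    }

    def is_high_risk(area: UseArea) -> bool:
        """Return True if the intended area of use appears in the high-risk list."""
        return area in HIGH_RISK_AREAS

    print(is_high_risk(UseArea.RECRUITMENT))  # True -> conformity assessment required
    print(is_high_risk(UseArea.OTHER))        # False

In reality, the classification also turns on the technology used and on further conditions in the regulation, so an actual assessment is more than a simple lookup.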

Such systems must undergo conformity assessment procedures before they can be placed on the market in the EU. Similarly, higher requirements apply to documentation and information to users, and “human oversight” must be ensured – another example of a mandatory human element in automated systems, reminiscent of data subject rights in automated individual decisions. All members of the value chain – suppliers, importers, distributors, and users – of high-risk systems also have specific obligations, including market monitoring requirements.

Authorities and sanctions

In addition, authorities are created:

  • Member States must establish competent national authorities or designate existing authorities;
  • a European Committee for Artificial Intelligence is created to advise the EU Commission.

Compliance is ensured by sanctions, which in extreme cases can reach up to 6% of annual turnover or EUR 30 million.
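Assuming the “whichever is higher” mechanism of the draft’s fining regime (mirroring the GDPR model), the maximum exposure can be sketched as follows; the function name and turnover figures are purely illustrative:

    # Sketch of the maximum fine, assuming the cap is the higher of EUR 30 million
    # and 6% of annual turnover. Figures are illustrative only.

    def max_fine_eur(annual_turnover_eur: float) -> float:
        """Upper bound of the fine for the most serious violations."""
        return max(30_000_000.0, 0.06 * annual_turnover_eur)

    print(max_fine_eur(1_000_000_000))  # 60000000.0 -> the 6% cap governs for large companies
    print(max_fine_eur(100_000_000))    # 30000000.0 -> the fixed amount governs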
