Take-Aways (AI)
  • EU institutions reached an agreement on the AI Act, including the inclusion of foundation models under a tiered regime with transparency obligations and mandatory labeling of AI-generated content.
  • Stronger regulation of “systemic” foundation models, strict prohibitions (e.g. biometric categorization, emotion recognition, social scoring) and sanctions of up to 7% of global turnover.

After several days of debate, representatives of the European Parliament and the Council finally reached an agreement on the AI Act on Friday (media release). The AI Act has thus taken a decisive step closer to adoption. However, it still has to be formally adopted in votes in Parliament and the Council.

On Thursday, a provisional compromise had already been reached on the inclusion of foundation models, a pièce de résistance of the negotiations (see here). The definition of the systems covered by the AI Act is now apparently based on that of the OECD, which, following an amendment on November 8, 2023, no longer requires AI systems to pursue objectives set by humans and now reads as follows:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

This definition includes foundation models. Foundation models are so named because, having been trained on extensive data, these machine learning models are suitable for a wide range of applications (Spain proposed the following definition in the discussion about the AI Act: an “AI model that is capable to competently perform a wide range of distinctive tasks”).

Because foundation models are not limited to a specific application, they are poorly covered by the Commission’s AI Act proposal. The agreement that has now been reached evidently adopts a tiered approach proposed by Spain. Transparency obligations apply to all models (e.g. with regard to training), AI-generated content must be recognizable as such, and copyright must – of course – be observed.

Certain foundation models – namely those that pose a systemic risk – are referred to as “systemic” and are more heavily regulated. These include models that have been trained with particularly high computing power. For example, they must be subject to evaluation and testing (red teaming), systemic risks must be assessed and mitigated, the Commission must be informed of serious incidents, cybersecurity must be ensured, and their energy efficiency must be reported.

A compromise was also reached on how to deal with open source models. Free and open source systems should only be covered by the AI Act if they constitute a prohibited practice, fall into the category of a high-risk system or are suitable for manipulation.

On Friday, the debate appeared to focus in particular on how to deal with biometric recognition systems in public spaces and on the question of whether public authorities should be allowed to use biometric systems to categorize people according to criteria such as gender, race or religion, to recognize emotions, or for police work (“predictive policing”). Some Member States consider the use of such practices appropriate for security purposes, e.g. France for the 2024 Olympic Games.

For high-risk systems, among other things, a Fundamental Rights Impact Assessment is to become mandatory. An agreement has also been reached with regard to prohibited practices. The following practices are to be prohibited:

  • biometric categorization systems that use particularly sensitive data or information (e.g. political, religious or philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the internet or from video surveillance footage to create corresponding databases;
  • emotion recognition in the workplace and in educational institutions;
  • social scoring based on social behavior or personal characteristics;
  • AI systems that manipulate people;
  • AI systems that exploit vulnerabilities (due to age, disability, or social or economic situation).

Depending on the type of violation and the size of the company, the upper limits for the sanctions were set at EUR 35 million or 7% of global turnover, or at EUR 7.5 million or 1.5% of turnover.
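For illustration – assuming that, as reported for the compromise, the higher of the fixed amount and the turnover-based amount applies – a company with a global annual turnover of EUR 1 billion would, for a violation in the highest sanction tier, face a maximum fine of max(EUR 35 million; 7% × EUR 1,000 million) = EUR 70 million.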

An office for artificial intelligence (the AI Office) is also to be set up within the EU Commission. The competent national authorities will also meet here to ensure uniform application of the AI Act.