In the EU, the legislative process for the "Artificial Intelligence Act" ("AI Act" or "AI Regulation") is on the home stretch after months of intensive negotiations. In the vote of May 11, 2023, the two responsible committees of the EU Parliament (IMCO and LIBE) reached political agreement on the compromise text for the world's first set of rules on artificial intelligence, after preliminary agreement had already been reached in the two committees at the end of April 2023. The current draft will now be voted on in the plenum of the EU Parliament in mid-June (June 14). After that, the trilogue negotiations between Parliament, Council and Commission will start.

Initial situation

The AI Act is a legislative project of the European Commission to regulate artificial intelligence (AI). The Commission published its proposal on April 21, 2021, making the EU the first legislator to submit a comprehensive proposal to regulate AI. With the proposed legislation, the EU is attempting a balancing act: on the one hand, the AI Act is intended to ensure that affected individuals do not suffer any disadvantages from the use of AI systems; on the other hand, the new regulation is intended to further promote innovation and give as much room as possible to the development and use of AI.

There had been delays in the legislative process since the end of 2022. The reason for this was not only the 3,000 amendments submitted, but also the emergence of generative AI (especially ChatGPT) and the discussion on how the AI Act should deal with it. In the draft AI Act of April 21, 2021, models such as ChatGPT did not yet play a role.

The AI Act applies to providers and users of AI systems. "Providers" are understood to be those actors who develop a system and place it on the market, while "users" include those entities that use a system under their own responsibility, excluding personal, non-professional use. Consumers, end users and other natural or legal persons affected by the results of the systems are not covered.

Risk-based approach of the regulation

The core element of the AI Act is a risk-based approach that entails various requirements and prohibitions based on potential capabilities and risks. The higher the risk an AI system poses to the health, safety or fundamental rights of individuals, the more stringent the regulatory requirements. The AI Act thus classifies AI applications into different risk categories with different consequences:

  • Unacceptable risk (e.g. social scoring) – the use of corresponding AI systems is prohibited
  • High risk (e.g. AI systems to be used for biometric identification of natural persons or for the assessment of exams)
  • Limited risk or no risk (e.g. spam filters)

Most important changes

The main changes compared to the Commission's April 21, 2021 draft are as follows:

  • Definition of AI systems
  • High-risk AI systems: additional layer for classification in the high-risk categories and more extensive obligations for corresponding systems
  • Prohibited AI systems: extended list
  • Stricter rules for so-called Foundation Models and General Purpose AI
  • Establishment of an AI Office
  • Six AI principles


Definition of AI systems

A major point of discussion concerned the definition of AI or "AI systems". Business and science criticized in particular the insufficient definitional clarity of AI systems, because the first definition in the draft of April 2021 could be applied to almost all forms of software. For this reason, the responsible members of parliament agreed on a new definition, which was aligned with the future definition of the OECD:

Art. 3(1): "Artificial intelligence system" (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.

Thus, for an AI system to fall within the scope of the AI Act, the system must be granted a degree of autonomy. This expresses a certain independence from the human operator or from human influence.

High-risk AI systems

One area that has been contentious within the two parliamentary committees and will likely lead to further discussion in the trilogue negotiations is the extensive list of high-risk applications (Annex III of the regulation). The original draft always classified AI systems that fall under the critical use cases listed in Annex III as high-risk. Parliamentarians have now added an additional requirement: an AI system should only be considered high-risk if it also entails a significant risk to health, safety or fundamental rights. A significant risk is a risk which, due to the combination of its severity, intensity, probability of occurrence and duration of its effects, is substantial and may affect an individual, a large number of individuals or a specific group of individuals (cf. Art. 1b).

If AI systems fall under Annex III but providers believe there is no significant risk, they must notify the competent authority, which has three months to object. In the meantime, providers can place their system on the market – but if the assessment is incorrect, the provider can be sanctioned.
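To illustrate the two-step classification logic described above (Annex III use case plus significant risk), the following minimal Python sketch models the decision flow. All names (AISystem, classify, the individual fields) are hypothetical illustrations and not terms of the regulation itself:

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()        # unacceptable risk, e.g. social scoring
    HIGH = auto()              # Annex III use case that also entails a significant risk
    LIMITED_OR_NONE = auto()   # everything else, e.g. spam filters

@dataclass
class AISystem:
    """Hypothetical, heavily simplified view of an AI system for illustration only."""
    annex_iii_use_case: bool           # listed in Annex III (e.g. biometric identification)
    significant_risk: bool             # severity, intensity, probability and duration combined
    prohibited_practice: bool = False  # e.g. social scoring

def classify(system: AISystem) -> RiskTier:
    """Two-step logic of the compromise text: an Annex III use case alone is no longer
    enough; a significant risk to health, safety or fundamental rights must also exist."""
    if system.prohibited_practice:
        return RiskTier.PROHIBITED
    if system.annex_iii_use_case and system.significant_risk:
        return RiskTier.HIGH
    # Annex III use case without significant risk: the provider must notify the
    # competent authority, which then has three months to object.
    return RiskTier.LIMITED_OR_NONE

# Example: an exam-scoring system listed in Annex III that the provider
# assesses as entailing a significant risk.
print(classify(AISystem(annex_iii_use_case=True, significant_risk=True)))  # RiskTier.HIGH
```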

AI systems used to manage critical infrastructures such as energy networks or water management systems are now also classified as high-risk, provided that these applications can lead to serious environmental risks. Recommendation systems of "very large online platforms" (more than 45 million users), as defined by the Digital Services Act (DSA), are also considered high-risk. Furthermore, additional safeguards (e.g. documentation requirements) have been included for the process by which providers of high-risk AI systems may process sensitive data, such as sexual orientation or religious beliefs, to detect negative bias. According to the latest draft, AI systems that fall into the high-risk category must also record their environmental footprint.

Extensive obligations are imposed on providers and users of high-risk AI systems, e.g. conformity assessment, risk management systems, technical documentation, record-keeping requirements, transparency and provision of information to users, human oversight, accuracy, robustness and cybersecurity, quality management systems, reporting of serious incidents and malfunctions, etc. In addition, specified quality criteria for training, validation and test data sets must be met.

Prohibited practices

A politically sensitive discussion centered on what type of AI systems should be banned because they pose an unacceptable risk. This category was expanded: the use of biometric identification software in real time would now be banned altogether; according to the compromise text, retrospective recognition software may only be used for the prosecution of serious crimes and only with prior judicial authorization. The use of AI-based emotion recognition software in law enforcement, border management, the workplace and education would also be banned.

In addition, "intentionally manipulative or deceptive techniques" are newly prohibited (although proving intent could be difficult). This prohibition does not apply to AI systems to be used for approved therapeutic purposes on the basis of informed and explicit consent. The MEPs also extended the ban on "predictive policing" from felonies to misdemeanors.

"General Purpose AI" and "Foundation Models"

Preliminary remarks:

  • Machine Learning (ML) is a subfield of AI.
  • General Purpose AI (GPAI) is in turn a subfield of ML that can generate new content such as text, images, video, code, etc. in response to a prompt.
  • Foundation Models (FMs; German: Basismodelle) are deep learning applications, usually trained on a wide range of data sources and large data sets, that can perform a wide range of tasks, including those for which they were not specifically developed and trained. FMs are a variant of GPAI.
  • A Large Language Model (LLM) is a sub-variant of FMs: a language model based on a large neural network.
  • GPT is a series of LLMs from OpenAI that has been under development since 2018. The latest version is GPT‑4.

The draft AI Act of April 21, 2021, lacked references to AI systems without a specific purpose (General Purpose AI). This changes with the current compromise text. The rise of ChatGPT and other generative AI systems has led parliamentarians to want to regulate "General Purpose AI systems" (GPAI) and "Foundation Models" as well.

Initially, calls for a ban or a permanent classification of ChatGPT and similar AI systems in the high-risk category were discussed. However, the current compromise text does not classify GPAI as high-risk per se. Only when providers integrate GPAI into AI systems that are considered high-risk do the strict requirements of the high-risk category also apply to the GPAI. In this case, GPAI providers must assist downstream providers in complying by providing information and documentation about the AI model.

Stricter requirements are also proposed for Foundation Models. These relate, for example, to risk management, quality management, data management, security and cybersecurity, and the degree of robustness of a foundation model. Art. 28b of the compromise text regulates the obligations of the providers of a Foundation Model regardless of whether it is provided as a standalone model or embedded in an AI system or product, under free and open-source licenses, as a service or through other distribution channels. In addition to a number of detailed transparency obligations (reference to Art. 52; e.g. disclosure to natural persons that they are interacting with an AI system), providers of Foundation Models should also be required to provide a "sufficiently detailed" summary of the use of copyright-protected training data (Art. 28b(4)(c)). It is not clear how this is to be implemented for companies such as OpenAI, because ChatGPT, for example, was trained on a dataset of over 570 GB of text data.
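Art. 28b(4)(c) does not prescribe any format for this training-data summary. Purely as an illustration of what a machine-readable summary could look like, the following sketch uses a hypothetical schema; the fields (source, description, copyright_protected, license) are assumptions for illustration, not requirements of the AI Act:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataSource:
    """Hypothetical entry of a 'sufficiently detailed' training-data summary."""
    source: str                 # e.g. a web corpus or a licensed archive
    description: str            # what kind of content the source contains
    copyright_protected: bool   # whether the source contains protected works
    license: str | None = None  # license or legal basis, if known

# Hypothetical example entries; real summaries would have to be far more detailed.
summary = [
    TrainingDataSource(
        source="General web crawl (hypothetical example)",
        description="Web text in multiple languages",
        copyright_protected=True,
        license="mixed / unknown",
    ),
    TrainingDataSource(
        source="Public-domain book corpus (hypothetical example)",
        description="Digitized books with expired copyright",
        copyright_protected=False,
    ),
]

print(json.dumps([asdict(s) for s in summary], indent=2))
```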

New AI principles

Finally, the compromise text contains in Art. 4a so-called "General Principles applicable to all AI systems". All actors covered by the AI Act should develop and deploy AI systems and foundation models in accordance with the following six "AI principles":

  • Human agency and oversight: AI systems should serve humans, respect human dignity and personal autonomy, and function in such a way that they can be controlled and monitored by humans.
  • Technical robustness and safety: unintended and unexpected damage should be minimized, and AI systems should be robust in the event of unintended problems.
  • Data protection and data governance: AI systems should be developed and deployed in compliance with data protection regulations.
  • Transparency: traceability and explainability must be possible, and people must be made aware that they are interacting with an AI system.
  • Diversity, non-discrimination and fairness: AI systems should involve diverse stakeholders and promote equal access, gender equality and cultural diversity, while avoiding discriminatory effects.
  • Social and environmental well-being: AI systems should be sustainable, environmentally friendly, and developed and used for the benefit of all people.

Establishment of a European AI Office

There was agreement in both parliamentary committees that the enforcement architecture should include a central element, particularly to support the harmonized application of the AI Act and for cross-border investigations. For this reason, the establishment of an AI Office was proposed. The new compromise text (Art. 56 ff.) explains the tasks of this office in detail.

Sanctions

Violations of the AI Act can result in severe fines, similar to the GDPR. Violations of the prohibitions or of the data governance requirements for high-risk systems are subject to fines of up to EUR 30 million or 6% of global annual revenue, whichever is greater.
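As a worked example of the "whichever is greater" rule, the following sketch computes the upper limit of the fine; the revenue figures are purely hypothetical:

```python
def max_fine(global_annual_revenue_eur: float) -> float:
    """Upper limit for violations of prohibitions or data governance requirements:
    EUR 30 million or 6% of global annual revenue, whichever is greater."""
    return max(30_000_000, 0.06 * global_annual_revenue_eur)

# Hypothetical examples: a smaller provider vs. a large group
print(max_fine(100_000_000))    # 30,000,000 EUR – the flat cap applies
print(max_fine(2_000_000_000))  # 120,000,000 EUR – 6% of revenue applies
```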

International scope: impact on Switzerland

Swiss providers who place AI systems on the market or put them into operation in the EU are also covered by the territorial scope of the AI Act. In addition, the AI Act applies to Swiss providers and users of AI systems if the output produced by the AI system is used in the EU.

The so-called "Brussels effect" will probably also be felt in Switzerland. Many Swiss AI providers will develop their products not only for Switzerland, which means that the new European standards of the AI Act are likely to become established in Switzerland as well.

Further procedure and entry into force

There could well be surprises in the parliamentary plenary vote in mid-June; however, the Parliament's position is largely consolidated. Once the Parliament has formally adopted its position, the draft will enter the final phase of the legislative process: the so-called trilogue negotiations, in which representatives of the EU Council, the EU Parliament and the EU Commission agree on a final text. However, the AI Act is not expected to be passed before the end of 2023 and will thus enter into force in mid-2024 at the earliest. There will then be a two-year implementation period. However, the provisions on notifying authorities and bodies as well as the provisions on the European Committee for Artificial Intelligence and the competent national authorities are to take effect as early as three months after entry into force. Art. 71 (Sanctions) will also already apply twelve months after entry into force.

Even if it will take some time before the regulation becomes relevant for (Swiss) companies, they should familiarize themselves with the current draft.