Take-Aways (AI)
  • The report calls for clarification of regulatory requirements for automated decision-making systems (ADMS) and AI, in particular transparency, ethics, and the avoidance of discrimination.
  • There is a lack of clarity regarding responsibility and liability for forecasts, recommendations, and decisions made by ADMS.
  • The Federal Council currently sees no need for a new general legal framework; selective adjustments and existing data protection provisions are usually sufficient.
  • The Federal Council refers to ongoing national and international initiatives (CNAI, platforms, EU monitoring) and rejects the postulate.

Postulate Marti (21.4406): Report on the regulation of automated decision-making systems

Submitted text

The Federal Council is instructed to submit a report indicating where there may be a need for regulation of automated decision-making systems (ADMS) or artificial intelligence. The focus here is on ensuring transparency, observing ethical guidelines, and avoiding discrimination or manipulation. Another aspect concerns questions of the attribution of responsibility and liability, where there is a need for legal clarification in connection with forecasts, recommendations, or decisions made by ADMS. The aim is to clarify whether the legal basis and instruments are sufficient to counter these risks. In this context, the creation of a national ethics commission is to be examined. Furthermore, the report should show where such systems are already in use in the public sector (e.g., law enforcement) and where, if applicable, the legal foundations are lacking.

Justification

Automated decision-making systems and artificial intelligence are being used in more and more areas, both in the private sector and in the public sector. This technology is also associated with certain risks. On the one hand, certain examples show that decisions can be discriminatory due to biases in models or training data or due to social or economic circumstances (e.g. Amazon's applicant screening system), or wrong or dangerous (IBM Watson's diagnostic tools for oncology). Likewise, they can be associated with severe restrictions on fundamental rights or a deterrent effect on the exercise of fundamental rights, such as in the area of preventive police work. Even systems that the EU Commission considers to be lower-risk can be highly problematic, such as so-called deep fakes.

In both the private and public sectors, it is often not recognizable for the public and affected parties where ADMS are used and when they are affected. Moreover, it often remains unclear for what purpose and by whom a system is used and how it works. The report is intended to remedy this situation. In addition, it is to show how transparency can be improved both vis-à-vis those affected and the general public (for example, through disclosure and information obligations toward those affected, or through the creation of a public register with basic information on the ADMS used in the public sector). The need for action was also highlighted in the position paper “A Legal Framework for Artificial Intelligence” by the Digital Society Initiative of the University of Zurich.

Statement of the Federal Council of 16 February 2022

Artificial intelligence (AI) raises numerous questions for innovative and forward-looking business locations that act according to the principles of the rule of law. The Federal Council already took stock of this topic in 2019, when the interdepartmental working group on AI published a corresponding report. Furthermore, in 2020 it adopted guidelines for dealing with AI in the federal administration and commissioned the Department of the Environment, Transport, Energy and Communications (DETEC) to regularly review their application and further develop them in collaboration with the offices concerned. In 2021, the Federal Department of Home Affairs (FDHA) received the mandate to set up a Competence Network for AI (CNAI) by spring 2022. Consequently, the needs and opportunities analysis largely overlaps with the position paper of the Digital Society Initiative of the University of Zurich mentioned in the postulate.

Concerning the legal provisions, the Federal Council has come to the conclusion, on the basis of the aforementioned report, that no new general legal framework is needed at present. AI can raise questions in various areas (e.g. medical diagnosis, agriculture, courts), but these are covered by the applicable legal bases and can usually be answered well. If this is not the case, selective solutions must be found; in certain areas, this can lead to the revision of a law or an ordinance. Reference should be made here to the new Data Protection Act adopted in 2020, which already provides for several provisions improving transparency in this area, in both the private and the public sector. Article 21 introduces an obligation to provide information on automated individual decisions, whereby the data subject may request that the decision be reviewed by a natural person. Individuals asserting the right to information shall be informed of the existence of an automated individual decision and of the logic on which the decision is based (Art. 25(2)(f)). With regard to the processing of personal data by federal bodies, a basis in a law in the formal sense is required if the manner in which the data is processed (this includes the use of algorithms) may lead to a serious interference with the fundamental rights of the data subject (Art. 34 para. 2 let. c). The Federal Council has also instructed the Federal Department of Finance (FDF) to prepare, by the end of 2022, an analysis of the legal framework for the application of AI in the financial sector. The CNAI strengthens transparency and trust in these technologies thanks to its structured database for public administration.
Furthermore, Switzerland actively participates in the work of the OECD, the Council of Europe, UNESCO, and the International Telecommunication Union (ITU) to develop an international regulatory framework in the field of AI. The Confederation is closely following the discussions regarding the new legal framework for AI in the EU and analyzing the possible consequences for Switzerland. In 2021, DETEC showed the Federal Council in a report how AI-based intermediaries and communication platforms affect public communication in Switzerland and the formation of opinion among the Swiss population. DETEC is also to inform the Federal Council by the end of 2022 of the extent to which communication platforms would have to be regulated. The Federal Council has instructed the Federal Department of Foreign Affairs (FDFA) to prepare a report on the work on the international regulatory framework in the field of AI and on the possibilities for Switzerland to participate actively. The Federal Council already commented on the need for a national ethics committee in May 2021, in its response to interpellation 21.3239 Schlatter. Once again, reference should be made to the “Plateforme Tripartite Suisse”, which serves as an exchange platform for AI issues open to all stakeholders and has an administrative committee to coordinate Swiss positions in international organizations and processes. In addition, the Federal Council will prepare a further report on this topic within the framework of the “Digital Switzerland” strategy as well as the Digital Foreign Policy Strategy 2021–2024.
Taking into account the work underway at national and international level on the international regulatory framework in the field of AI, a parliamentary mandate does not appear necessary at present.

The Federal Council proposes that the postulate be rejected.