Motion Marti (23.3806): Declaration obligation for artificial intelligence applications and automated decision-making systems

Submitted text

The Federal Council is instructed to create the legal basis for a declaration obligation for artificial intelligence applications and automated decision-making systems. This declaration obligation should apply to applications in both the public and private sectors. This ensures uniform standards and thereby creates trust among both citizens and the companies that use AI technologies.

Justification

Algorithmic systems and artificial intelligence are already being used by state and private actors. However, it is not always apparent to those affected that such systems are in use. Transparency and traceability are among the measures recommended both by experts and by representatives of the industries concerned. The declaration obligation is intended to ensure that all AI-based applications and systems used in Switzerland disclose their AI components. This includes information about the algorithms used, the data sources, the training data and the evaluation methods. Such disclosure makes it possible to better understand the functioning and potential of AI applications and to identify possible biases, discrimination or undesirable effects. The introduction of a declaration obligation for artificial intelligence strengthens Switzerland as a pioneer in the field of ethical AI and positions us as a responsible country that takes seriously the protection of privacy, the fight against discrimination and the promotion of transparency in technology development.

Opinion of the Federal Council of 30.8.2023

The issue of creating transparency in the use of AI systems is currently being discussed both in the EU's work on the "AI Act" and in the negotiations on a binding AI agreement in the Council of Europe under the Swiss Presidency. Transparency and traceability are basic principles and important components of both sets of rules. At the same time, both sets of regulations follow a risk-based approach that imposes correspondingly different regulatory requirements on applications with different risks. Depending on the context, a graduated and differentiated regulatory approach is to be applied. The EU currently does not provide for a declaration obligation for applications classified as minimal risk. How exactly the Council of Europe will formulate such a graduated approach is still the subject of negotiations. However, it can be assumed that agreement will be reached on a declaration obligation for applications above a certain risk level (but not for all AI applications).

When the new Data Protection Act (nDSG) comes into force on September 1, 2023, an information obligation will already apply in Switzerland for decisions based exclusively on automated data processing (Art. 21 nDSG). This provision covers decisions that are of a certain complexity and that have legal consequences for the data subject or significantly affect them; it applies to both the public and private sectors. Under Article 25 paragraph 2 letter f nDSG, the controller is also obliged, as part of the right of access, to inform the data subject of the logic on which an automated individual decision is based. In addition, the data subject must be informed of the amount and type of information used and how it is weighted. On this basis, the data subject should be able to understand the decision and, if necessary, contest it.

In its response to the Postulate Dobler (23.3201), the Federal Council has already announced that it will draw up an overview of possible options for sectoral and, where necessary, horizontal regulatory measures in the area of AI by the end of 2024. The relevant analyses will be carried out within the existing bodies (in particular the Tripartite Platform and its Administrative Committee, the interdepartmental EU Digital Policy Coordination Group, the AI Guidelines Monitoring and the AI Competence Network (CNAI)) and with the involvement of all federal agencies concerned. This analysis will also address the question of the extent to which a declaration obligation for AI systems going beyond the rules already provided for in the nDSG would be appropriate in Switzerland.