
Motion Glättli (24.3795): Protection against discrimination in the use of AI and algorithms

Submitted text

The Federal Council is instructed to create or adapt the legal provisions needed to provide adequate protection against discrimination through partially or fully automated decision-making processes.

Justification

Discrimination is recognized as one of the biggest risks of fully and partially automated decision-making processes. This has relevant effects, for example, in the allocation of housing, the calculation of insurance premiums and creditworthiness, or the processing of job applications.

Unfortunately, however, the general protection against discrimination in Article 8 para. 2 BV (which, in conjunction with Art. 35 para. 3 BV, should also apply between private individuals) is not yet specified in statutory law. This is to be changed. In the case of (partially) automated decision-making procedures, it must be taken into account in particular that discrimination can occur not only directly but also indirectly (via proxies), and that a large number of people can be affected due to scaling effects. Depending on the risk of the application, appropriate transparency and due-diligence obligations, including impact assessments, are therefore also necessary.

Finally, special consideration must be given to enforcement. Enforcement must not fail because providing individual evidence is very difficult or technically impossible, especially in the case of AI applications without a transparent and comprehensible decision-making mechanism (the black-box problem). With its mandate of 22 November 2023, the Federal Council is already conducting an interdepartmental review of AI regulation. The protection against discrimination called for here can be integrated into subsequent legislative procedures where necessary and, where possible and appropriate, also coordinated with international regulations.