
Motion Glättli (24.3796): Transparent risk-based impact assessments for the use of AI and algorithms by the federal government

Submitted text

The Federal Council is instructed to oblige the Federal Administration, when using algorithmic and AI-based systems, to assess their impact on fundamental rights and societal risks and to make the results transparent. This obligation should apply to all systems that are used to make or support decisions in the Federal Administration.

Justification

Systems based on artificial intelligence (AI) or algorithms can create forecasts, make recommendations, take decisions and generate content. Public administrations in Switzerland are also increasingly testing or using such systems. These systems offer opportunities for greater efficiency in administration, but they also pose risks to fundamental rights, democratic processes and the rule of law.

In particular, they can have discriminatory effects on certain population groups. In the Netherlands, thousands of families faced existential hardship when a discriminatory algorithm wrongly required them to pay back state childcare benefits. In Austria, an algorithm calculated the probability of unemployed people being reintegrated into the labor market. It awarded minus points if unemployed people had care obligations, but only if they were women.

These risks depend on the context, purpose and manner of use. The authorities must therefore assess risks systematically. A first stage of risk assessment makes it possible to triage low-risk and high-risk applications quickly and easily. Only if risk signals emerge at this triage stage is a more comprehensive impact assessment to be carried out. The results of this impact assessment are to be made transparent and accessible in a directory.
The need for such an impact assessment has already been recognized in the Council of Europe's AI Convention. Switzerland will therefore have to introduce such a mechanism in order to ratify the Convention. The European Union's AI Act likewise provides for a fundamental rights impact assessment for the use of high-risk AI systems by public administration, which must be made transparent.