Motion Glättli (24.3795): Protection against discrimination in the use of AI and algorithms
Submitted text
The Federal Council is instructed to create or adapt legal provisions in order to provide adequate protection against discrimination arising from partially or fully automated decision-making processes.
Justification
Discrimination is recognized as one of the greatest risks of fully and partially automated decision-making processes. It has tangible effects, for example, in the allocation of housing, the calculation of insurance premiums and creditworthiness, and the processing of job applications.
Unfortunately, however, the general protection against discrimination under Article 8 para. 2 of the Federal Constitution (BV), which, in conjunction with Art. 35 para. 3 BV, should also apply between private individuals, has not yet been specified in statutory law. This needs to change. In the case of (partially) automated decision-making procedures, it must be taken into account in particular that discrimination can occur not only directly but also indirectly (via proxy variables), and that a large number of people can be affected due to scaling effects. Depending on the risk posed by the application, appropriate transparency and due diligence obligations, including impact assessments, are therefore also required.
Finally, special consideration must be given to the enforcement of rights. Enforcement must not fail simply because individual proof is very difficult or technically impossible to provide, especially in the case of AI applications that lack a transparent and comprehensible decision-making mechanism (the black-box problem). With its mandate of 22.11.2023, the Federal Council is already conducting an interdepartmental review of AI regulation. The protection against discrimination called for here can, if necessary, be integrated into the subsequent legislative procedures and, where possible and appropriate, coordinated with international regulations.