Submitted text
The Federal Council is instructed to oblige the Federal Administration to assess the impact of the algorithmic and AI-based systems it uses on fundamental rights and societal risks, and to make the results transparent. This obligation should apply to all systems used to make or support decisions in the Federal Administration.
Justification
Systems based on artificial intelligence (AI) or algorithms can produce forecasts, make recommendations, take decisions and generate content. Public administrations in Switzerland, too, are increasingly testing or using such systems. These systems offer opportunities for greater administrative efficiency, but they also pose risks to fundamental rights, democratic processes and the rule of law.
In particular, they can have discriminatory effects on certain population groups. In the Netherlands, thousands of families faced existential hardship when a discriminatory algorithm wrongly demanded that they pay back state childcare benefits. In Austria, an algorithm calculated the probability of unemployed people being reintegrated into the labor market. It deducted points if unemployed people had care obligations – but only if they were women.
These risks depend on the context, purpose and manner of use. The authorities must therefore assess the risks systematically. A first stage of risk assessment makes it possible to triage low-risk and high-risk applications quickly and easily. Only if risk signals emerge at this triage stage is a more comprehensive impact assessment to be carried out. The results of this impact assessment are to be made transparent and accessible in a directory created for this purpose.
The need for such an impact assessment has already been established in the Council of Europe's AI Convention. Switzerland will therefore have to introduce such a mechanism in order to ratify the Convention. The European Union's AI Act likewise provides for a fundamental rights impact assessment for the use of high-risk AI systems by public administration, the results of which must be made transparent.