The FDPIC has published a communication – perhaps prompted by the Executive Order from President Biden – pointing out that data protection law applies to AI-supported processing of personal data. He had already commented on the topic in April; now he does so in somewhat more detail.
Information from the FDPIC
The FDPIC first points out that the Federal Administration in Switzerland is evaluating various approaches to regulating AI, expected to conclude by the end of 2024 (see also the response to the Dobler postulate). However, given the pace of developments in the field of AI, the FDPIC rightly notes that data protection law also applies to the processing of personal data where that processing is supported by AI. Data protection law can therefore apply to manufacturers, providers and users of such applications.
The FDPIC emphasizes above all the requirement of transparency (as in his previous communication). According to the FDPIC,
- the purpose
- the mode of operation and
- the data sources
of AI-based processing must be made transparent. This requirement is “closely linked” to the right to “object to automated data processing” and to the rights of data subjects in the case of automated individual decisions.
In the case of intelligent language models that communicate with data subjects, the latter should have the right to know
whether they are speaking or corresponding with a machine and whether the data they enter is processed to improve the self-learning programs or for other purposes.
The same applies to the use of programs that enable deep fakes (the “falsification of faces, images or voice messages”) of identifiable persons, which
must always be clearly recognizable, unless it proves to be entirely unlawful in the specific case due to prohibitions under criminal law.
A DPIA may also be required. Certain applications are prohibited outright, namely if they
are aimed at undermining the privacy and informational self-determination protected by the FADP […]. This refers in particular to AI-based data processing of the kind observed in authoritarian states, such as comprehensive real-time facial recognition or the comprehensive observation and evaluation of lifestyle, so-called “social scoring”.
Notes
The FDPIC refers at the outset to the aforementioned Executive Order by President Biden, but on the substance he is clearly inspired by the AI Act. In Art. 52 para. 1, the AI Act requires basic transparency:
Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are dealing with an AI system, unless this is obvious due to the circumstances and context of use. This requirement does not apply to AI systems authorized by law for the detection, prevention, investigation and prosecution of criminal offences, unless these systems are available to the public to report a criminal offence.
Information must also be provided in the case of deep fakes (Art. 52 para. 3):
Users of an AI system that generates or manipulates image, sound or video content that noticeably resembles real persons, objects, places or other entities or events and would falsely appear to a person to be real or truthful (“deepfake”) must disclose that the content was artificially generated or manipulated.
Art. 5 then prohibits certain practices, for example subliminal influence, including the exploitation of particular vulnerabilities, as well as certain forms of discrimination and biometric real-time remote identification systems.
This is interesting because, amid the general perplexity about how to deal with AI applications, no superior regulatory approach has yet emerged. However, the risk-based approach of the AI Act has some merit. The desire to hold providers, importers, distributors and users of AI applications to account and to regulate their risk management is quite understandable. There is also the view that we should wait and see how things develop and address problems once they have manifested themselves, but this seems a little naïve – not all problems are reversible, especially not the biggest ones. Increased liability in the area of AI has been postulated and is plausible, but it, too, only leads to a shift of assets. In any case, it seems as if the FDPIC would like to anticipate an adoption of the principles of the AI Act. Not least because the AI Act has a certain extraterritorial applicability (see here), such an adoption is certainly within the realm of probability.
The FDPIC cannot remain passive in the face of these dramatically rapid developments. However, he has no instrument other than data protection (especially after the Federal Administrative Court, in the Helsana decision, rejected giving significant weight to concerns outside data protection, such as consumer protection). It is therefore obvious that he postulates certain minimum principles and justifies them in terms of data protection law.
However, this does not mean that these principles are clearly justified de lege lata. The requirement of transparency, for example, is reflected in Art. 6 para. 2 and 3 FADP and is of course a basic principle. Yet this principle does not actually require information about the modalities of data processing unless the processing is particularly risky, which may or may not be the case with AI applications. Nor does the law require information about specific types of processing as part of the duty to provide information. Such a transparency obligation can therefore only be derived from the principle of good faith, which can be used to justify almost anything, provided it is not outright turned – in a systematically questionable way – into the standard for interpreting the principle of transparency.
Nevertheless, it also makes sense for reputational reasons to declare AI applications if they process personal data. This certainly applies to chatbots and similar applications, but it need not apply to every machine learning application. When contracts are automatically categorized with ML, when email notification systems automatically determine the time at which emails are sent, when translation programs are trained, etc., personal data is often processed. Here, however, the processing by ML or AI is incidental. In contrast to applications that are specifically aimed at processing personal data with AI, transparency is much less of an issue in such cases.
With this distinction or reservation, the FDPIC’s advice can certainly be followed. However, the level of detail to be provided about the functionality of an AI application remains open. Simply stating that AI is used for purpose X should often suffice. If a service provider also uses personal data for training purposes for the benefit of other customers, i.e. not only as a processor, a corresponding reference, e.g. in a privacy policy, also makes sense. On transparency, see also p. 10 of the VUD’s “Use of generative AI – Guide to data protection law”.