The FDPIC has published a communication – perhaps on the occasion of the Executive Order from President Biden – pointing out that data protection law is applicable to AI-supported processing of personal data. He had already commented on the topic in April; now he does so in somewhat more detail.

Information from the FDPIC

The FDPIC first points out that the Federal Administration in Switzerland is evaluating various approaches to regulating AI, probably until the end of 2024 (see also the response to the Dobler postulate). However, given the pace of developments in the field of AI, the FDPIC is of course right to point out that data protection law also applies to the processing of personal data where that processing is supported by AI. Data protection law may therefore apply to manufacturers, providers and users of such applications.

The FDPIC emphasizes above all the concern for transparency (as in the previous communication). In his view,

  • the purpose,
  • the mode of operation and
  • the data sources

of AI-based processing must be made transparent. This requirement is “closely linked” to the right to “object to automated data processing” and to the rights of data subjects in the case of automated individual decisions.

In the case of intelligent language models that communicate with those affected, the latter should have the right to know

whether they are speaking or corresponding with a machine and whether the data they enter is being processed to improve the self-learning programs or for other purposes.

This also applies to the use of programs that enable deep fakes (the “falsification of faces, images or voice messages”) of identifiable persons; such use

must always be clearly recognizable, unless it proves to be completely unlawful in the specific case due to prohibitions under criminal law.

A DPIA may also be required. Certain applications are then prohibited, namely if they

are aimed at undermining the privacy and informational self-determination protected by the FADP […]. This refers in particular to AI-based data processing of the kind that can be observed in authoritarian states, such as comprehensive facial recognition in real time or the comprehensive observation and evaluation of lifestyle, so-called “social scoring”.

Notes

The FDPIC refers at the outset to the aforementioned Executive Order by President Biden, but on the substance is surely inspired by the AI Act. In Art. 52 para. 1, the AI Act requires basic transparency:

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are dealing with an AI system, unless this is obvious from the circumstances and context of use. This requirement does not apply to AI systems authorized by law for the detection, prevention, investigation and prosecution of criminal offences, unless these systems are available to the public to report a criminal offence.

Information must also be provided in the case of deep fakes (Art. 52 para. 3):

Users of an AI system that generates or manipulates image, sound or video content that noticeably resembles real persons, objects, places or other entities or events and would falsely appear to a person to be real or truthful (“deepfake”) must disclose that the content was artificially generated or manipulated.

Art. 5 then prohibits certain practices, for example subliminal influence or the exploitation of particular vulnerabilities, as well as certain forms of discrimination and biometric real-time remote identification systems.

This is interesting because no superior regulatory approach has yet emerged from the general perplexity about how to deal with AI applications. However, the risk-based approach of the AI Act has some merit. The desire to hold providers, importers, retailers and users of AI applications to account and to regulate their risk management is quite understandable. There is also the view that we should wait and see how things develop and address problems once they have manifested themselves, but this seems a little naïve – not all problems are reversible, especially not the biggest ones. Increased liability in the area of AI is being postulated and is plausible, but it too only leads to a shift of assets. In any case, it seems as if the FDPIC would like to anticipate an adoption of the principles of the AI Act. Also because the AI Act has a certain extraterritorial applicability (see here), such an adoption is certainly within the realm of probability.

The FDPIC cannot remain passive in the face of the dramatically rapid developments. However, he has no instrument other than data protection law (especially after the Federal Administrative Court, in the Helsana decision, rejected giving significant weight to concerns outside data protection, such as consumer protection). It is therefore understandable that he postulates certain minimum principles and justifies them in terms of data protection law.

However, this does not mean that these principles are clearly justified de lege lata. The concern for transparency, for example, is mentioned in Art. 6 para. 2 and 3 FADP and is of course a basic principle. However, this principle does not actually require information about the modalities of data processing unless the processing is particularly risky, which may or may not be the case with AI applications. Nor does the law require information about specific types of processing as part of the duty to provide information. Such a transparency obligation can therefore only be derived from the principle of good faith, which can justify almost anything if it is not straight away – in a systematically questionable manner – made the standard for interpreting the principle of transparency.

Nevertheless, for reputational reasons alone it also makes sense to disclose AI applications if they process personal data. This certainly applies to chatbots and similar applications. However, it does not have to be the case for machine learning applications. When contracts are automatically categorized with ML, email notification systems automatically determine the time at which emails are sent, translation programs are trained, etc., personal data is often processed. However, the processing by ML or AI is incidental in these cases. In contrast to applications that are specifically aimed at processing personal data with AI, transparency is much less of an issue here.

With this distinction or reservation, the FDPIC’s advice can certainly be followed. However, the level of detail to be provided about the functionality of an AI application remains open. Simply stating that an AI is used for purpose X should often suffice. If a service provider also uses personal data for training purposes for the benefit of other customers, i.e. not only as a processor, a corresponding reference, e.g. in a privacy policy, also makes sense. On transparency, see also p. 10 of the “Use of generative AI – Guide to data protection law” of the VUD.