noyb: Complaint against OpenAI (access and rectification)


noyb, Max Schrems’ organization, filed a complaint against OpenAI with the Austrian data protection authority on 29 April 2024.

The complaint concerns a ChatGPT user, apparently a public figure, who had noticed that ChatGPT stated an incorrect date of birth for him. When the user raised the issue, OpenAI replied that it was not possible to prevent the system from giving the wrong answer. Although filters can prevent the disclosure of personal data, the date of birth cannot be filtered out without other data also being affected, and blocking all data about the user would in turn violate the right to freedom of expression and information.
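OpenAI’s stated dilemma can be made concrete with a minimal sketch (purely hypothetical – OpenAI’s actual filtering layer is not public, and the name, pattern, and text below are invented): a pattern-based output filter that suppresses dates appearing near a person’s name cannot distinguish the incorrect date of birth from other, accurate dates.

```python
import re

# Hypothetical illustration of the filtering dilemma described above. The
# real filtering layer is not public; name, pattern, and text are invented.
DATE = re.compile(
    r"\b\d{1,2} (January|February|March|April|May|June|July|"
    r"August|September|October|November|December) \d{4}\b"
)

def filter_dates_near(name: str, text: str) -> str:
    """Redact every date in sentences that mention the given name."""
    out = []
    for sentence in text.split(". "):
        if name in sentence:
            # The filter cannot tell the wrong date of birth apart from
            # other, accurate dates - it blocks them all.
            sentence = DATE.sub("[redacted]", sentence)
        out.append(sentence)
    return ". ".join(out)

answer = ("Max Mustermann was born on 1 April 1970. "
          "Max Mustermann published his first book on 12 May 2001.")
print(filter_dates_near("Max Mustermann", answer))
# -> both the (incorrect) birth date and the legitimate publication
#    date are suppressed; the filtering is inherently over-broad.
```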

On the merits, noyb argues that the right of access was violated because OpenAI provided information only about the user account, not about personal data in the model, although personal data was obviously included there.

Furthermore, the right to rectification was violated. Freedom of expression does not cover the processing of false personal data, and the technical impossibility of a correction is no justification:

In the present case, however, the respondent cannot invoke its freedom of expression. In the area of data protection, untrue statements of fact do not fall within the scope of protection of freedom of expression. Furthermore, the (false) date of birth of the complainant would not contribute anything to a debate of public interest. The respondent was also unable to cite any legal provision under Article 85 GDPR that would allow a departure from the principle of accuracy in favor of the respondent’s freedom of expression – which is not applicable here.

It must be stressed that the claimed technical impossibility to erase or rectify the data subject’s date of birth without blocking other relevant pieces of information is by no means a valid justification to derogate from the principle of accuracy under Article 5(1)(d) GDPR. The fact that a software developed by a controller is unable to comply with the law makes the processing simply unlawful – but never the law inapplicable.

noyb is therefore demanding an investigation and remedial measures, and proposes a fine. In noyb’s view, OpenAI in the USA is also a controller – there is at least joint controllership with the Irish OpenAI company, which was probably only set up pro forma.

However, an inaccurate statement by an LLM is not necessarily “false” – not if the recipient must be aware that such statements are nothing but statistical probabilities. Such approximate statements are only false if they are presented as fact, i.e. if the legitimate understanding of the public does not correspond to reality. One may also ask whether a right of access should not be excluded because personal data in the model is, so to speak, only available in the “brain” – in any case, a person would not have to provide information about their own thoughts.
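The point about “statistical probabilities” can be illustrated with a minimal sketch (the numbers and dates are invented; this is not OpenAI’s actual decoding procedure): a language model does not retrieve a stored record, it samples from a learned distribution over plausible continuations of the prompt.

```python
import random

# Invented numbers, for illustration only: an LLM does not look up a stored
# date of birth, it samples from a learned distribution over plausible
# continuations of the prompt.
candidates = {
    "1 April 1970": 0.34,   # plausible-sounding, but unverified
    "1 April 1971": 0.29,
    "17 June 1970": 0.21,
    "3 March 1968": 0.16,
}

def sample_answer(dist: dict[str, float]) -> str:
    """Draw one continuation, as temperature sampling in a decoder would."""
    dates, weights = zip(*dist.items())
    return random.choices(dates, weights=weights, k=1)[0]

for _ in range(3):
    print("model output:", sample_answer(candidates))
# Repeated runs can yield different dates; none of them is a fact
# retrieved from a record, only the most probable-looking string.
```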

The fact that an LLM cannot be changed in a targeted manner, but only through further training, is indeed reminiscent of a brain. If the model is therefore treated like a person – which of course it is not de lege lata, but which would not be completely absurd as a broad analogy – there would be, as mentioned, no right of access under data protection law, because “knowledge in one’s head” is not subject to disclosure (cf. here).

Such an analogy would not necessarily be wrong from a copyright perspective either, or at least it could solve some difficulties: training would be the enjoyment of the work, which is free and not dependent on a limitation rule, and the output could in turn be subject to copyright, to which the “client”, i.e. the owner of the model, would be entitled under existing rules. In terms of tax law, the “person AI model” would probably have to be localized at the market location or at the recipient of the service in order to establish a permanent establishment or a recipient principle as in the case of VAT. Not that an AI is human-like (the opposite is more likely the case), but certain analogies suggest themselves.
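The contrast between rectifying a database record and “rectifying” a model can be sketched in a few lines (illustrative only; names and values are invented): a database offers a targeted write to an addressable record, while training only nudges all weights slightly toward new examples.

```python
# Illustrative contrast, with invented values. In a database, rectification
# under Art. 16 GDPR is one targeted write to an addressable record:
records = {"Max Mustermann": {"dob": "1 April 1970"}}
records["Max Mustermann"]["dob"] = "2 April 1970"  # corrected in place

# In an LLM there is no such address: the association is spread across all
# parameters, and the only generic lever is further training, i.e. nudging
# every weight slightly in the direction of new examples.
weights = [0.12, -0.53, 0.88, 0.05]      # stand-ins for model parameters
gradients = [0.01, -0.02, 0.00, 0.03]    # derived from new training data
learning_rate = 0.1
weights = [w - learning_rate * g for w, g in zip(weights, gradients)]
print(weights)  # every weight moves a little; no single "record" was edited
```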