On 29 April 2024, noyb, Max Schrems’ organization, filed a complaint against OpenAI with the Austrian data protection authority.
The complaint concerns a ChatGPT user, apparently a public figure, who had noticed that ChatGPT stated an incorrect date of birth for him. When the user in question asked about this, OpenAI replied that it was not possible to prevent the system from giving the wrong answer. Filters could prevent the disclosure of personal data, but the date of birth could not be filtered out without other data also being affected, and blocking all data about the user would in turn violate the right to freedom of expression and information.
On the merits, noyb argues that the right of access was violated because OpenAI provided information only about the user account, not about personal data in the model, although personal data was evidently included.
Furthermore, the right to rectification was said to be violated. Freedom of expression does not cover the processing of false personal data, and a technical impossibility of correction is no justification:
In the present case, however, the respondent cannot invoke its freedom of expression. In the area of data protection, untrue statements of fact do not fall within the scope of protection of freedom of expression. Furthermore, the (false) date of birth of the complainant would not contribute anything to a debate of public interest. The respondent was also unable to cite any legal provision under Article 85 GDPR that would allow a derogation from the principle of accuracy in favor of its freedom of expression, which is not applicable here.
It must be stressed that the claimed technical impossibility of erasing or rectifying the data subject’s date of birth without blocking other relevant pieces of information is by no means a valid justification to derogate from the principle of accuracy under Article 5(1)(d) GDPR. The fact that software developed by a controller is unable to comply with the law simply makes the processing unlawful, but never the law inapplicable.
noyb therefore demands an investigation and remedial measures and proposes a fine. According to the complaint, OpenAI in the USA is also the controller; there is at least joint controllership with the Irish OpenAI company, which was probably set up only pro forma.
However, an inaccurate statement by an LLM is not necessarily “false”, at least not where the recipient must be aware that its statements are nothing but statistical probabilities. Such approximate statements are false only if they are presented as fact, i.e. if the legitimate understanding of the audience does not correspond to reality. One may also ask whether a right of access should not be excluded because personal data in the model is, so to speak, only available in the “brain”; in any case, a person would not have to provide information about their own thoughts.
The fact that an LLM cannot be changed in a targeted manner, but only in the course of further training, is indeed reminiscent of a brain. If the model were therefore treated like a person, which of course it is not de lege lata, but which would not be entirely absurd as a rough analogy, there would be no data protection right of access, as mentioned above, because no information has to be provided about “knowledge in one’s head” (cf. here). Such an analogy would not necessarily be wrong from a copyright perspective either, or at least it could solve some difficulties: training would amount to the enjoyment of the work, which is free and does not depend on a limitation provision, and the output could in turn be subject to copyright, which would accrue to the “client”, i.e. the owner of the model, under existing rules. In terms of tax law, the “AI model as a person” would probably have to be located at the market location or with the recipient of the service in order to establish a permanent establishment or a recipient principle as with VAT. Not that an AI is human-like (if anything, the opposite is true), but certain analogies suggest themselves.