- The AEPD imposed a fine of EUR 30,000 because the voices of underage victims and perpetrators were not sufficiently anonymized in an online post.
- Voices are considered personal data; their uniqueness and AI-supported recognizability increase identification risks.
- Media outlets must assess technically reasonable distortion measures (e.g. voice distortion) and TOMs (technical and organizational measures); DPIAs (data protection impact assessments) are recommended for risk assessment.
The Spanish data protection authority AEPD imposed a fine of EUR 30,000 on a media company in a decision dated 15 May 2025 because the voices of at least three minors – including the victims and perpetrators of a violent crime – were not sufficiently anonymized in a news article published online. Although the faces were pixelated, the voices were not distorted.
The decisive factor for the AEPD was that the voice constitutes personal data. Due to its uniqueness and recognizability, a voice can identify a person or at least make them identifiable (original in Spanish):
According to Article 4 (1) of the General Data Protection Regulation (GDPR), a person’s voice is personal data as it enables identification and therefore falls within the scope of the GDPR: […]
The voice is a personal, individual characteristic of every natural person and is defined by pitch, volume and timbre. It possesses unique and unmistakable characteristics that enable a direct assignment to a specific person. The voice can also reveal an individual’s age, gender, state of health, personality, cultural background and hormonal, emotional and psychological state. Elements such as expression, idiolect or intonation are also personal data if they are considered in connection with the voice.
Therefore, the report 139/2017 of the Legal Service of this authority states that “the image and voice of a person are personal data, as well as any information that allows their identity to be established directly or indirectly (…)”.
There is no more detailed analysis of the probability of identification, but the AEPD notes that identification is made easier by AI:
Today’s technology, especially tools based on artificial intelligence, makes it possible to identify a person by their voice.
The transmission of the unaltered voices was also not necessary to inform the public, and a distortion of the voices, analogous to the pixelation of the faces, would have been reasonable. In addition to the fine, the company was obliged to implement TOMs to prevent similar violations.
The following conclusions are obvious:
- In practical terms, the decision is not surprising. The concepts of personal data (and particularly sensitive personal data) are generally interpreted broadly.
- The concept of personal data is interpreted here in a risk-oriented manner. In the case of minors, the personal reference is apparently understood more broadly, and the risks associated with the use of voice recordings also appear to feed back into the assessment of identifiability. Conceptually, this is actually wrong: the probability of identification does not increase per se because data relates to minors or is particularly worthy of protection. Rather, this would have to be examined on a case-by-case basis.
- The broad availability of AI is, in practice, also likely to lead to a higher assessment of identification possibilities and thus to an extension of the scope of application of data protection law. This is not necessarily wrong, but in turn does not replace an examination of the individual case.
- Data protection impact assessments (DPIAs) should take these factors into account, in particular the possibility of identification by third parties in light of current technology or technology expected in the near future. The available technical anonymization measures (e.g. voice distortion) can also be examined. DPIAs, which are often carried out on a voluntary basis, are suitable for such an examination (template). They can also lead to the result that a sufficient possibility of identification cannot be assumed.
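To make the voice distortion mentioned above concrete, the following is a minimal sketch of one crude technique: a pitch shift by resampling, applied here to a synthetic tone rather than a real recording. The function name and parameters are illustrative assumptions, not taken from the decision or any particular anonymization tool; production-grade voice anonymizers use more robust methods (e.g. phase vocoders) that preserve duration and intelligibility.

```python
import numpy as np

def pitch_shift(signal: np.ndarray, semitones: float) -> np.ndarray:
    """Crude pitch shift: resample the signal so it plays back faster
    (higher pitch) or slower (lower pitch). Note that this simple
    approach also changes the duration of the audio."""
    factor = 2 ** (semitones / 12)  # frequency ratio per semitone
    # Read the signal at stretched positions, with linear interpolation.
    positions = np.arange(0, len(signal) - 1, factor)
    lower = signal[positions.astype(int)]
    upper = signal[positions.astype(int) + 1]
    frac = positions - positions.astype(int)
    return lower * (1 - frac) + upper * frac

# Demo: a 440 Hz tone shifted up 5 semitones lands near 587 Hz,
# enough to noticeably alter a speaker's fundamental frequency.
sr = 16_000                      # sample rate in Hz
t = np.arange(sr) / sr           # one second of samples
tone = np.sin(2 * np.pi * 440 * t)
shifted = pitch_shift(tone, 5.0)
```

Whether such a distortion is sufficient for anonymization in a given case would itself be part of the case-by-case examination discussed above, since simple pitch shifts can sometimes be reversed.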