Take-Aways (AI)
  • The AEPD imposed a fine of EUR 30,000 because the voices of underage victims and perpetrators were not sufficiently anonymized in an online post.
  • Voices are considered personal data; their uniqueness and AI-supported recognizability increase identification risks.
  • Media must check technically reasonable distortions (e.g. voice distortion) and technical and organizational measures (TOMs); DPIAs are recommended for risk assessment.

The Spanish data protection authority AEPD imposed a fine of EUR 30,000 on a media company in a decision dated 15 May 2025 because the voices of at least three minors – including the victims and perpetrators of a violent crime – were not sufficiently anonymized in a news article published online. Although the faces were pixelated, the voices were not distorted.

The decisive factor for the AEPD was that the voice constitutes personal data. Due to its uniqueness and recognizability, a voice can identify a person or at least make them identifiable (original in Spanish):

According to Article 4(1) of the General Data Protection Regulation (GDPR), a person's voice is personal data as it enables identification and therefore falls within the scope of the GDPR: […]

The voice is a personal, individual characteristic of every natural person and is defined by pitch, volume and timbre. It possesses unique and unmistakable characteristics that enable a direct assignment to a specific person. The voice can also reveal an individual's age, gender, state of health, personality, cultural background and hormonal, emotional and psychological state. Elements such as expression, idiolect or intonation are also personal data if they are considered in connection with the voice.

Therefore, Report 139/2017 of the Legal Service of this authority states that "the image and voice of a person are personal data, as well as any information that allows their identity to be established directly or indirectly (…)".

There is no more detailed analysis of the probability of identification. However, identification is made easier by AI:

Today's technology, especially tools based on artificial intelligence, makes it possible to identify a person by their voice.
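
To illustrate how low the technical barrier has become, here is a minimal sketch of AI-supported speaker identification, assuming the open-source resemblyzer package; the file names are placeholders, not taken from the decision:

```python
# Minimal sketch: compare a published voice clip against a reference
# sample using speaker embeddings. Assumes the open-source
# "resemblyzer" package (pip install resemblyzer); any modern
# speaker-embedding model works along the same lines.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # pretrained speaker-embedding model

# Load and normalize the two recordings (paths are placeholders).
published = preprocess_wav("published_clip.wav")
reference = preprocess_wav("known_person_sample.wav")

# Fixed-size, L2-normalized voice embeddings.
emb_published = encoder.embed_utterance(published)
emb_reference = encoder.embed_utterance(reference)

# Cosine similarity; since the embeddings are unit-length, the dot
# product suffices. Values close to 1.0 suggest the same speaker.
similarity = float(np.dot(emb_published, emb_reference))
print(f"Voice similarity: {similarity:.2f}")
```

That such a comparison takes a dozen lines of code and no special expertise underscores the AEPD's point that an unaltered voice is readily linkable to a person.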

Broadcasting the unaltered voices was also not necessary to inform the public, and a distortion of the voices analogous to pixelation would have been reasonable. In addition to the fine, the company was obliged to implement technical and organizational measures (TOMs) to prevent similar violations.

The following conclusions are obvious:

  • In practical terms, the decision is not surprising. The concepts of personal data (and particularly sensitive personal data) are generally interpreted broadly.
  • The concept of personal data is interpreted here in a risk-oriented manner. In the case of minors, the personal reference is apparently understood more broadly, and the risks associated with the use of voice recordings also appear to feed back into the assessment of identifiability. Conceptually, this is actually wrong: the probability of identification does not increase per se because data relates to minors or is particularly worthy of protection. Rather, this would have to be examined on a case-by-case basis.
  • In practice, the wide availability of AI is also likely to lead to a higher assessment of the possibilities of identification and thus to an extension of the scope of application of data protection law. This is not necessarily wrong, but it likewise does not replace an examination of the individual case.
  • In data protection impact assessments, these factors should be taken into account, in particular the possibilities of identification by third parties in light of current technology or technology expected in the near future. The technical options for anonymization (e.g. voice distortion; see the sketch below) can also be examined. DPIAs, which are often carried out on a voluntary basis, are suitable for such an examination (template). They can also lead to the conclusion that a sufficient possibility of identification cannot be assumed.
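
As a rough illustration of what such a voice distortion could look like, here is a minimal sketch using pitch shifting with the librosa library; the file names and the shift amount are assumptions, and production-grade voice anonymization would typically combine several transformations rather than rely on pitch shifting alone:

```python
# Minimal sketch: distort a voice recording by shifting its pitch.
# Assumes librosa and soundfile are installed; "interview.wav" is a
# placeholder input file. Pitch shifting alone can sometimes be
# reversed, so real anonymization pipelines usually add further
# transformations (e.g. formant shifts or vocoder-based re-synthesis).
import librosa
import soundfile as sf

# Load the recording at its native sampling rate.
audio, sr = librosa.load("interview.wav", sr=None)

# Shift the pitch down by four semitones (an arbitrary example value).
distorted = librosa.effects.pitch_shift(audio, sr=sr, n_steps=-4.0)

sf.write("interview_distorted.wav", distorted, sr)
```

A DPIA could then assess whether the residual identification risk after such a transformation is acceptably low, for instance by testing whether speaker-identification tools still match the distorted clip to the original speaker.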