EDSA: Guidelines on deceptive measures in social media.

The European Data Protection Board (EDSA) published, already on February 14, 2023, the 74-page version 2.0 of the Guidelines on deceptive measures in social media (Guidelines 03/2022 on Deceptive design patterns in social media platform interfaces: how to recognise and avoid them). The previous version 1.0 had been opened for public consultation on March 14, 2022.

The guidelines are based on the GDPR and present in great detail a number of practices that are deceptive from the perspective of the EDSA or its members, serving as a guide to data-protection-compliant design and as a checklist for review, each with design guidance. Appendix 2 provides additional “best practice” recommendations to facilitate compliance with the GDPR.

Overall, the EDSA’s recommendations can be summarized as follows: the operator of a social media site should behave decently and put itself in the user’s shoes, the latter seen as a user with a short attention span and no critical attitude. The operator violates this not only if the user is taken for a fool or harassed, but also, for example, if the user cannot delete an account but can only deactivate it, a common but annoying procedure. Inadmissible would be, for example:

Example 56: In the process of deleting their account, users are provided with two options to choose from: to delete their account or to pause it. By default, the pausing option is selected.
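Read as a design rule, the compliant alternative is a dialog with no default at all. The following TypeScript sketch (the names and dialog structure are invented for illustration, not taken from the Guidelines) models a deletion dialog in which nothing is preselected and confirmation requires an active choice:

```typescript
type AccountAction = "delete" | "pause";

interface DeletionDialogState {
  selected: AccountAction | null; // null until the user actively picks
}

function createDialog(): DeletionDialogState {
  return { selected: null }; // no option is preselected
}

function choose(state: DeletionDialogState, action: AccountAction): DeletionDialogState {
  return { ...state, selected: action };
}

function confirmChoice(state: DeletionDialogState): AccountAction {
  if (state.selected === null) {
    // The confirm button stays inert until an explicit choice is made.
    throw new Error("No option selected: the user must actively choose");
  }
  return state.selected;
}

let dialog = createDialog();
dialog = choose(dialog, "delete");
console.log(confirmChoice(dialog)); // "delete"
```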

Categories of prohibited practices

The following categories, which relate to the content (e.g. wording) or the design of social media pages (user interface), are discussed in the guidelines:

“Overloading”:

Overloading means users are confronted with an avalanche/large quantity of requests, information, options or possibilities in order to prompt them to share more data or unintentionally allow personal data processing against the expectations of the data subject. The following three deceptive design pattern types fall into this category: Continuous prompting, Privacy Maze and Too Many Options.

Example: A user is prompted to enter his phone number every time he logs in.
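Avoiding such “continuous prompting” essentially means remembering a refusal. A minimal sketch, with assumed field and function names, of a login flow that asks at most once and respects an earlier “no”:

```typescript
interface PromptState {
  phoneNumber?: string;
  declinedPhonePrompt: boolean; // remembered across logins
}

function shouldPromptForPhone(state: PromptState): boolean {
  // Prompt only if no number is stored AND the user has not declined before.
  return state.phoneNumber === undefined && !state.declinedPhonePrompt;
}

function decline(state: PromptState): PromptState {
  return { ...state, declinedPhonePrompt: true };
}

let state: PromptState = { declinedPhonePrompt: false };
console.log(shouldPromptForPhone(state)); // true: the first login may ask once
state = decline(state);
console.log(shouldPromptForPhone(state)); // false: the "no" is respected
```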

“Skipping”:

Skipping means designing the interface or user journey in a way that users forget or do not think about all or some of the data protection aspects. The following two deceptive design pattern types fall into this category: Deceptive Snugness and Look over there.

One example, according to the EDSA, would be to give the user choices, but to emphasize one, namely the non-data-saving one, e.g. by a corresponding graphical design (graying out the other options). This would violate the principle of “privacy by default”:

Example 9 shows a Deceptive Snugness pattern, as it is not the option offering the highest level of data protection that is selected, and therefore activated, by default. In addition, the default effect of this pattern nudges users to keep the pre-selection, i.e. to neither take time to consider the other options at this stage nor to go back to change the setting at a later stage.

Under Swiss law, such a design certainly does not violate the principle of privacy by default, because a choice is suggested to the user, but the user still confirms it. Only if the confirmation is perceived as quasi-void, i.e. the deception goes that far, could one speak of a violation.
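For illustration, “privacy by default” in the EDSA’s reading can be pictured as initializing every setting to its most data-protective value, with any less protective value requiring an explicit user action. A hypothetical sketch (the setting names are invented):

```typescript
type Visibility = "public" | "contacts" | "onlyMe";

interface PrivacySettings {
  profileVisibility: Visibility;
  locationSharing: boolean;
  personalizedAds: boolean;
}

// Defaults select the most data-protective value of each setting.
const privacyByDefault: PrivacySettings = {
  profileVisibility: "onlyMe",
  locationSharing: false,
  personalizedAds: false,
};

// A less protective value is only ever set by an explicit user action,
// never baked into the UI as a preselection.
function applyUserChoice<K extends keyof PrivacySettings>(
  settings: PrivacySettings,
  key: K,
  value: PrivacySettings[K],
): PrivacySettings {
  const updated = { ...settings };
  updated[key] = value;
  return updated;
}

let settings = privacyByDefault;
settings = applyUserChoice(settings, "profileVisibility", "contacts");
console.log(settings.profileVisibility); // "contacts", chosen by the user
```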

“Stirring”:

Stirring affects the choice users would make by appealing to their emotions or using visual nudges. The following two deceptive design pattern types fall into this category: Emotional Steering and Hidden in plain sight.

An example, according to the EDSA, would be when users are asked to disclose more than the required data:

“Tell us about your amazing self! We can’t wait, so come on right now and let us know!”

“Obstructing”:

Obstructing means hindering or blocking users in their process of becoming informed or managing their data by making the action hard or impossible to achieve. The following three deceptive design pattern types fall into this category: Dead end, Longer than necessary and Misleading action.

“Fickle”:

Fickle means the design of the interface is inconsistent and not clear, making it hard for the user to navigate the different data protection control tools and to understand the purpose of the processing. The following four deceptive design pattern types fall into this category: Lacking hierarchy, Decontextualising, Inconsistent Interface and Language Discontinuity.

“Left in the dark”:

Left in the dark means an interface is designed in a way to hide information or data protection control tools or to leave users unsure of how their data is processed and what kind of control they might have over it regarding the exercise of their rights. The following two deceptive design pattern types fall into this category: Conflicting information and Ambiguous wording or information.

A summary and an overview of these categories can be found in the appendix of the Guidelines.

These practices may violate Art. 5 GDPR (processing principles) and, in the case of consent, possibly also Art. 4 No. 11 and Art. 7 GDPR. More specifically, they may undermine the following concerns, which the EDSA attributes to the GDPR:

  • Autonomy – Data subjects should be granted the highest degree of autonomy possible to determine the use made of their personal data, as well as autonomy over the scope and conditions of that use or processing.
  • Interaction – Data subjects must be able to communicate and exercise their rights in respect of the personal data processed by the controller.
  • Expectation – Processing should correspond with data subjects’ reasonable expectations.
  • Consumer choice – The controllers should not “lock in” their users in an unfair manner. Whenever a service processing personal data is proprietary, it may create a lock-in to the service, which may not be fair, if it impairs the data subjects’ possibility to exercise their right of data portability in accordance with Article 20 GDPR.
  • Power balance – Power balance should be a key objective of the controller-data subject relationship. Power imbalances should be avoided. When this is not possible, they should be recognized and accounted for with suitable countermeasures.
  • No deception – Data processing information and options should be provided in an objective and neutral way, avoiding any deceptive or manipulative language or design.
  • Truthful – The controllers must make available information about how they process personal data, should act as they declare they will and not mislead data subjects.

The EDSA discusses these practices in Section 3 of the Guidelines, each with a description, an analysis of the applicable provisions of the GDPR, and examples.

Revocation of consent

The EDSA points out, among other things, that when designing consent declarations, attention should be paid to the medium in which the consent is requested, i.e. the subject of the consent and the required information about it should also be clearly recognizable on a cell phone. In many other respects, the EDSA follows its well-known strict practice, for example with regard to the revocability of consent:

As an example, consent cannot be considered valid under the GDPR when consent is obtained through only one mouse-click, swipe or keystroke, but the withdrawal takes more steps, is more difficult to achieve or takes more time.

And:

Example 33: A social media provider does not provide a direct opt-out from a targeted advertisement processing even though the consent (opt-in) only requires one click.
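The symmetry the EDSA demands can be pictured as follows: withdrawing consent must take the same single action as giving it. A minimal sketch, with an assumed ConsentRecord shape, that also documents the time of each consent event:

```typescript
interface ConsentRecord {
  purpose: string;
  given: boolean;
  timestamp: Date; // documents when consent was given or withdrawn
}

// One click to opt in ...
function giveConsent(purpose: string): ConsentRecord {
  return { purpose, given: true, timestamp: new Date() };
}

// ... and exactly one click to opt out: no extra confirmation steps,
// surveys or retention dialogs in between.
function withdrawConsent(record: ConsentRecord): ConsentRecord {
  return { ...record, given: false, timestamp: new Date() };
}

const ads = giveConsent("targeted advertising");
const withdrawn = withdrawConsent(ads);
console.log(withdrawn.given); // false, after a single action
```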

The Guidelines contain various other references to consent and its documentation.

Requirements for privacy statements

Requirements for privacy statements are also addressed, for example with the following, undoubtedly correct statement:

However, more information does not necessarily mean better information. Too much irrelevant or confusing information can obscure important content points or reduce the likelihood of finding them. Hence, the right balance between content and comprehensible presentation is crucial in this area. If this balance is not met, deceptive design patterns can occur.

However, this is easier said than done, because privacy statements are not written for the large mass of users, but for the few exceptions who will meticulously search them for loopholes in the event of a dispute. If one wants to be safe here, one will often write the privacy statement quite extensively, which threatens to inflate it. With extensive privacy statements, however, a good graphical presentation helps, for example with pop-up texts such as here, or a clear distinction between basic statements and examples, or good structuring (the EDSA also points this out), etc. Privacy Icons can also make a contribution here.

On the language requirements for privacy statements, the EDSA says:

Users will face this deceptive design pattern [“language discontinuity”] when data protection information is not provided in the official languages of the country where they live, whereas the service is provided in that language.

This may be understood to mean that data protection information must be presented in the language in which the offer can be used. However, the following is also inadmissible:

Each time users call up certain pages, such as the help page, these automatically switch to the language of the country users are in, even if they have previously selected a different language.
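Avoiding this “language discontinuity” amounts to letting an explicit user choice always override the geo-derived language, on every page including help and privacy pages. A short sketch under assumed identifiers:

```typescript
interface LanguageContext {
  chosenByUser?: string;  // e.g. "de", set explicitly in the account settings
  derivedFromGeo: string; // e.g. from an IP-based country lookup
}

function pageLanguage(ctx: LanguageContext): string {
  // An explicit user choice always wins over the geo-derived language.
  return ctx.chosenByUser ?? ctx.derivedFromGeo;
}

console.log(pageLanguage({ derivedFromGeo: "fr" }));                     // "fr"
console.log(pageLanguage({ chosenByUser: "de", derivedFromGeo: "fr" })); // "de"
```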

If a service can be used both on a website and in an app, the required information would basically have to be provided directly in the app:

It is important to note that even stronger effects than those caused by too many layers can occur when not only several devices, but also several apps provided by the same social media platform, such as special messenger apps, are used. Users who use that kind of secondary app would face greater obstacles and efforts if they have to call up the browser version or the primary app to obtain data protection related information. In such a situation, which is not only cross-device but cross-application, the relevant information must always be directly accessible no matter how users use the platform.

In other respects, too, the EDSA remains strict and reiterates the opinion that formulations in privacy statements such as “we might use your data for…” or “our services” are too generic. The requirements for the content of the information also remain high:

Example 46: The social media platform does not explicitly state that users in the EU have the right to lodge a complaint with a supervisory authority, but only mentions that in some – without mentioning which – countries, there are data protection authorities which the social media provider cooperates with regarding complaints.

With regard to the references to data subject rights, the EDSA goes even further:

Example 49: The paragraph under the subtitle “right to access” in the privacy policy explains that users have the right to obtain information under Article 15(1) GDPR. However, it only mentions users’ possibility to receive a copy of their personal data. There is no direct link visible to exercise the copy component of the right of access under Article 15(3) GDPR. Rather, the first three words in “You can have a copy of your personal data” are slightly underlined. When hovering over these words with the mouse, a small box is displayed with a link to the settings.

This is a case of “hidden in plain sight”. Such a design should probably rather be mentioned among the best practices.
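What the EDSA apparently expects instead is an always-visible control. A minimal sketch (the markup and URL are invented for illustration) that renders the Article 15(3) request link as a directly visible element rather than a hover-only hint:

```typescript
// Renders the access-right paragraph with an always-visible link instead
// of a tooltip that only appears on mouse-over.
function renderAccessRightSection(): string {
  return [
    "<h3>Right of access</h3>",
    "<p>You can have a copy of your personal data.</p>",
    '<a href="/settings/data-copy">Request a copy of your data</a>',
  ].join("\n");
}

console.log(renderAccessRightSection());
```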

Security breach notification

The EDSA also addresses this issue. A security breach exists, for example, if an app can access more data as a result of a programming error. In the case of high risks, security breaches must also be communicated to the affected parties, and here, too, deceptive designs must be avoided, for example by linking the information about the type and scope of the breaches with “unspecific and irrelevant information and the implications and precautionary measures the controller has taken or suggests to take”: “This partly irrelevant information can be misleading and users affected by the breach might not fully understand the implications of the breach or underestimate the (potential) effects.”

The following is also inadmissible:

Example 20: The controller only refers to actions of a third party, asserting that the data breach originated from a third party (e.g. a processor) and that therefore no security breach occurred. The controller also highlights some good practices that have nothing to do with the actual breach.

The controller declares the severity of the data breach in relation to itself or to a processor, rather than in relation to the data subject.

Or:

Example 21: Through a data breach on a social media platform, several sets of health data were accidentally accessible to unauthorized users. The social media provider only informs users that “special categories of personal data” were accidentally made public.

Or:

Example 22: The controller only provides vague details when identifying the categories of personal data affected, e.g. the controller refers to documents submitted by users without specifying what categories of personal data these documents include and how sensitive they were.

Or:

Example 23: When reporting the breach, the controller does not sufficiently specify the category of the affected data subjects, e.g. the controller only mentions that concerned data subjects were students, but does not specify whether the data subjects are minors or groups of vulnerable data subjects.

Or:

Example 24: A controller declares that personal data was made public through other sources when it notifies the breach to the Supervisory Authority and to the data subject. Therefore, the data subject considers that there was no security breach.
