Take-Aways (AI)
  • Extension of the platform work approach to all employment relationships, with additional protection against algorithmic management.
  • Prohibition of processing sensitive data such as emotions, neurodata, private communication and location outside of work.
  • Duties of transparency, consultation, effective human oversight and integration of risks into health and safety systems.

On December 17, 2025, the Employment Committee of the EU Parliament adopted recommendations to the Commission for a Directive on algorithmic management in the workplace (case file).

According to Art. 225 TFEU, Parliament may request the Commission to submit a legislative proposal. If the report on the legislative initiative is adopted, the Commission has three months to inform Parliament of the planned next steps or to justify why it does not comply with Parliament's demands.

In terms of content, the draft follows on from the Platform Work Directive (see also Pärli in Jusletter 2024), but extends its approach to all employment relationships. The list of prohibitions in Art. 5 partially overlaps with Art. 5 AI Act (prohibited AI practices), but goes further, for example in its explicit prohibition of neurosurveillance and of processing data on employees' emotional state.

Need for regulation

According to the report, between a quarter and 80% of companies in the EU use at least one form of algorithmic management. 26.5% of employees are said to be supervised by software, and 27.4% have tasks assigned via software.

The report first identifies gaps in the existing legal framework. The Platform Work Directive is limited to platform work:

[T]he Platform Work Directive's provisions on algorithmic management (in particular workers' rights to transparency, human review, worker information and consultation and OSH) only apply to persons performing platform work, leaving other workers increasingly subject to algorithmic management less protected; underlines the need to ensure equal treatment of all workers […]

Unlike the final version, the draft report from summer 2025 contained even clearer statements on the protection gaps in the AI Act and the GDPR:

  • The AI Act classifies work-related AI systems as high-risk, but focuses on market access and product safety:

    The AI Act represents a significant step forward in regulating high-risk artificial intelligence systems, it nevertheless remains insufficient to fully address the challenges posed by algorithmic management in the workplace. Although it classifies work-related AI tools as high-risk, its primary focus is on market placement, product safety, and compliance obligations for providers and users, and not on the employer-worker relationship. Furthermore, the AI Act does not apply to algorithmic management systems that are not AI-based, leaving a regulatory gap in addressing the broader impact of digital management tools on workers' rights, working conditions, and social dialogue.

  • The GDPR, in turn, dates back to 2016 and was not designed for the specific challenges of data protection in the workplace:

    Regulation (EU) 2016/679 […] dates back to 2016 and was not specifically designed to address the particular challenges of data protection in the workplace […] Article 15(1), point (h), of Regulation (EU) 2016/679, which lays down the transparency requirements for and the limitations of data processing, only provides for clear prohibitions in the case of fully automated decision-making processes, which are therefore not sufficient in most employment-related contexts. What is more, Regulation (EU) 2016/679 adopts an individualistic approach and does not grant collective rights. Since the entry into force of Regulation (EU) 2016/679, Article 88 on the protection of workers' personal data has been poorly implemented and remains largely ineffective in nearly all Member States.

"Algorithmic management"

Unlike the draft, the resolution no longer contains a definition of algorithmic management, but refers to the definition in the Platform Work Directive:

'algorithmic management' should be defined as automated monitoring systems and automated decision-making systems.

The Platform Work Directive defines these terms in Art. 2:

  • Automated monitoring systems: systems for monitoring, supervision or performance evaluation by electronic means
  • Automated decision-making systems: systems that make or support decisions that significantly influence working conditions

Recommendations to the Commission

The annex contains eleven recommendations to the EU Commission for a proposal for a directive:

Transparency and information obligations (recommendation 3)

Employers should inform the affected employees and their representatives in writing about:

  • the use or planned use of algorithmic management systems
  • the effects on working conditions and employment status
  • the categories of data collected, the processing purposes and recipients
  • the mechanisms of human supervision
  • training and support measures

Applicants should also be informed about automated decision-making systems.

Consultation obligations (recommendation 4)

The introduction of new systems relating to remuneration, evaluation, work organization, task allocation or working hours should trigger consultation obligations.

Prohibited practices (recommendation 5)

The resolution calls for a ban on the processing of the following data:

  • emotions, moods, brain activity or biometric data
  • private communication, including with colleagues or employee representatives
  • behavior outside of working hours or in private areas; location tracking outside of working hours
  • data that allow conclusions to be drawn about trade union activities or the exercise of other fundamental rights
  • special categories of data under Art. 9 GDPR (health, ethnicity, religion, political opinion, sexual orientation, etc.) and inferences about such data

These prohibitions should also apply to the application process.

Human supervision (recommendation 6)

The resolution calls for continuous and effective human oversight of all decisions that are made or supported by algorithmic management:

The persons responsible for oversight and evaluation should have the competence, training and authority necessary to exercise those functions, including the authority to override automated decisions.

Employees should have a right to an explanation for decisions that affect key aspects of their employment relationship. Automation is prohibited for certain decisions:

Decisions concerning the initiation or termination of employment, the renewal or non-renewal of a contractual agreement, or any changes in remuneration or disciplinary action should always be taken by a human being and should be subject to human review.

Occupational health and safety (recommendation 7)

Employers should integrate the risks of algorithmic systems into their occupational health and safety systems, including psychosocial and ergonomic risks as well as undue pressure on employees.

Supervision (recommendations 8 and 10)

Labor inspectorates should be responsible for monitoring. In addition, the data protection authorities should monitor the application of the provisions on data processing in the employment context, in cooperation with the labor authorities.

Proportionality and SMEs

SMEs should be spared excessive bureaucracy:

The proposal should respect the principle of proportionality and should ensure that the administrative and compliance burden imposed is appropriate to the size of the employer and the resources at its disposal, the nature of the technologies used, and the level of the risk involved, particularly with regard to micro, small and medium-sized enterprises.