EU Commission: Guidelines on prohibited AI practices


On February 4, 2025, the EU Commission published Guidelines on prohibited AI practices under the AI Act:

These prohibitions became effective on February 2, 2025 (the sanctions will become effective on August 2, 2025).

The guidelines explain the concept of prohibited practices and contain examples of their application. They are based on the Commission's mandate under Art. 96 AIA to issue guidelines on practical implementation (see our FAQ AI Act). Certain practices (use cases: the placing on the market, putting into service or use of systems that do, or are intended to do, prohibited things) are prohibited in the following categories:

  • Manipulation
  • Exploiting weaknesses
  • Social scoring
  • Biometric categorization
  • Emotion recognition
  • Predictive policing
  • Scraping with facial recognition
  • Biometric real-time remote identification in public

The guidelines primarily deal with the interpretation of these use cases. They also address a number of related terms:

  • the Placing on the market (Art. 3(9) AI Act), which also includes making a system accessible via an API;
  • the Putting into service (Art. 3(11) AI Act), which includes the transfer for initial use to a third party, but also a company's own initial use ("in-house development and deployment");
  • the Use: not defined in the AI Act; any use after placing on the market or putting into service. The prohibited practices also cover misuse, including unforeseeable misuse (i.e. not only foreseeable misuse as in Art. 3(12) and (13) AI Act);
  • the Provider: nothing new;
  • the Deployer: the body that uses the AI system (AIS) (not the employees, but their employer), and does so "under its authority". This means the following (cf. Rosenthal: "It makes sense here to fall back on the practice relating to the comparable concept of the 'responsible party' or 'controller' in data protection"):

    "Authority" over an AI system should be understood as assuming responsibility over the decision to deploy the system and over the manner of its actual use.

    The provider may not place AIS and GPAI on the market or put them into service if their prohibited use is "reasonably likely". In the case of a GPAI that is used for a chatbot, the provider should therefore install safety measures. For its part, the deployer may not use AIS for prohibited purposes. Providers should also contractually exclude prohibited use by deployers and, depending on the case, monitor use by the deployer. If they become aware of a prohibited use, they should also react.

The Commission then addresses exclusions from the scope of application for:

  • the area of national security and the armed forces
  • mutual legal assistance and judicial cooperation
  • research and development
  • exclusively personal activity, and
  • FOSS.

Other topics include the relationship of the AI Act to other legislation and the enforcement of the prohibitions.