Take-Aways (AI)
  • The third draft of the Code of Practice supports GPAIM providers in complying with the AI Act; it is not legally binding, but fosters conformity.
  • GPAIM are large, generally usable models; providers must supply technical documentation, training-data summaries and copyright compliance strategies.
  • GPAISR are GPAIM with systemic risks; they are subject to notification duties, standardized evaluations and extended risk-mitigation obligations.
  • The draft defines 18 commitments (2 for all GPAIM, 16 for GPAISR) across transparency, copyright, and safety and security, with detailed measures.

The EU Commission has published the third draft of a Code of Practice for general-purpose AI models:

GPAIM are AI models with general usability, which is presumed if a model has at least one billion parameters and has been trained on a large amount of data using self-supervision at scale. A GPAIM is not an AI system (AIS), nor is it an AIS with a general purpose; a GPAIM only becomes an AIS once additional components are added. The obligations of GPAIM providers are governed by a separate chapter of the AI Act (AIA). Among other things, providers must create technical documentation, provide downstream AIS providers with further information about the GPAIM, put in place a strategy for compliance with EU copyright law, provide a summary of the training data and, if necessary, appoint an authorized representative.
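As a purely illustrative sketch (not part of the draft or the AI Act), the presumption rule above can be expressed as a simple check; the function name and inputs are hypothetical:

```python
def is_presumed_gpaim(num_parameters: int, self_supervised_at_scale: bool) -> bool:
    """Illustrative only: the presumption of general usability applies to models
    with at least one billion parameters that were trained on a large amount of
    data using self-supervision at scale."""
    ONE_BILLION = 1_000_000_000
    return num_parameters >= ONE_BILLION and self_supervised_at_scale

# A 7B-parameter model trained with large-scale self-supervision meets the presumption:
print(is_presumed_gpaim(7_000_000_000, True))   # True
# A 500M-parameter model does not:
print(is_presumed_gpaim(500_000_000, True))     # False
```

Note that the presumption is rebuttable in practice; a threshold check like this only identifies the default classification, not the final legal assessment.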

GPAISR are GPAIM with systemic risks, i.e. risks that, due to the reach of the GPAIM or its potential negative consequences, have a significant impact "on public health, safety, public security, fundamental rights or society as a whole" and can propagate along the entire value chain. GPAISR must be notified to the Commission, and their providers have additional obligations: they must evaluate the GPAIM in a standardized way, assess and mitigate systemic risks at EU level, document information on serious incidents and possible remedial measures and, if necessary, inform the AI Office and the competent national authorities, and ensure sufficient cybersecurity.

See our FAQ on the AI Act.

Against this background, the Code of Practice serves as a guide to help GPAIM providers comply with the AI Act (Art. 56 AIA) – bridging the gap between the providers' obligations, which will apply from August 2025, and the introduction of harmonized standards, expected from August 2027. It is not legally binding, but compliance with it creates a presumption of conformity with the GPAIM provider obligations (Recital 117: "Providers should be able to rely on codes of practice to demonstrate compliance with the obligations"). See here for more information on the Code of Practice.

The third draft is likely to be largely mature, but will probably undergo a few more adjustments before its final adoption in May 2025, following the feedback phase running until March 30 and the accompanying workshops.

The draft provides for 18 commitments: two for all GPAIM providers and a further 16 for GPAISR providers. These obligations are divided into three main areas:

Transparency (all GPAIM)

Documentation (I.1)
  • Measure I.1.1: Create and maintain up-to-date model documentation to meet the requirements of Article 53(1)(a) and (b) of the AI Act.
  • Measure I.1.2: Provide information to downstream providers and the AI Office on request to enable the integration of the models into AI systems and to support the supervisory tasks of the national competent authorities.
  • Measure I.1.3: Ensure the quality, security and integrity of the documented information to ensure the trustworthiness of the models.
  • Amendment: Introduction of a user-friendly model documentation form to simplify documentation.

Copyright (all GPAIM)

Copyright policy (I.2)
  • Measure I.2.1: Draw up and implement an up-to-date copyright policy to ensure compliance with EU legislation on copyright and related rights.
  • Measure I.2.2: Identification of and compliance with rights reservations under Article 4(3) of Directive (EU) 2019/790.
  • Measure I.2.3: Implementation of technologies for recognizing and complying with copyrights.
  • Measure I.2.4: Creation of processes for handling copyright infringements.
  • Measure I.2.5: Documentation of compliance with copyrights.
  • Measure I.2.6: Regular review and updating of the copyright policy.
  • Amendment: Stricter requirements for the identification of and compliance with copyrights.

Safety and Security (GPAISR only)

  • Risk identification and analysis (II.1): Continuous identification of systemic risks using the CoP's risk taxonomy; analysis of the probability and severity of risks and categorization into risk tiers. Amendment: More detailed risk taxonomy and analysis in the third draft.
  • Collection of evidence and model evaluation (II.2): Collection of evidence of systemic risk and assessment of the capabilities and limitations of the models in accordance with the CoP rules. Amendment: Stricter requirements for evidence collection and model evaluation.
  • Risk assessment cycle (II.3): Continuous risk assessment throughout the entire life cycle of the model. Amendment: Emphasis on continuous monitoring.
  • Risk mitigation (II.4): Assignment of appropriate safety and security measures to each risk tier. Amendment: More detailed risk mitigation measures.
  • Safety and Security Reports (SSR) (II.5): Creation and regular updating of safety and security reports documenting risk and mitigation assessments. Amendment: Regular updating of the reports.
  • Risk governance (II.6): Allocation of responsibility and resources for systemic risks at executive and board level. Amendment: Stronger emphasis on governance.
  • Security measures against unauthorized access (II.7): Implementation of security measures that at least meet the RAND SL3 security goal. Amendment: Concrete security goals defined.
  • Safety and security reports (II.8): Preparation of safety and security reports containing the results of systemic risk assessment and mitigation. Amendment: More detailed reports.
  • Systemic risk mitigation by design (II.9): Implementation of design principles to minimize systemic risks. Amendment: Emphasis on fairness and transparency.
  • Continuous monitoring and updating (II.10): Continuous monitoring and regular updating of the models. Amendment: Stronger emphasis on continuous monitoring.
  • Cooperation with external partners (II.11): Cooperation with external partners to identify and mitigate systemic risks. Amendment: Increased importance of cooperation.
  • Serious incident reporting (II.12): Monitoring, documenting and reporting of serious incidents. Amendment: More precise reporting mechanisms.
  • Non-retaliation protection (II.13): Protection of employees who report risks. Amendment: Greater emphasis on protection.
  • Notifications (II.14): Regular information of the AI Office about the implementation of the commitments. Amendment: Regular reporting.
  • Documentation (II.15): Documentation of relevant information in accordance with the AI Act. Amendment: More detailed documentation requirements.
  • Public transparency (II.16): Publication of information on systemic risks for public transparency. Amendment: Increased transparency.