The AI Act provides in Art. 3(56) and Art. 4 for an obligation referred to as "AI literacy" ("AI competence"), which applies to providers as well as deployers, including their employees and auxiliary persons, and not only to high-risk systems but to all AI systems.

Art. 3(56) AIA defines what this is about:

(56) 'AI literacy' means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause

How to get there is determined by Art. 4 AIA:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

Because this duty is so general, because it can be tackled in parallel with other governance activities, and because it has been in force since February 2, 2025, it is one of the first duties of the AI Act to come into focus. At the same time, it is rather unspecific.

Against this background, the European Commission's FAQ on Art. 4 AIA of May 7, 2025 is welcome. The following points are worth noting:

  • Art. 4 is not a recommendation, but provides for a mandatory obligation;
  • However, there is some leeway in terms of content. At a minimum, organizations should ensure that
    • a general understanding of AI is achieved (what is it, how does it work, what do we use ourselves, what are the risks, etc.),
    • the characteristics of the organization are taken into account, and
    • employees understand the relevant risks;
  • As a rule, this will call for training courses or similar instruction; the instructions for use of the AI systems alone are hardly sufficient;
  • Even if only ChatGPT is used, sufficient AI literacy is required. The obligation is therefore to be concretized on a risk basis, but without a de minimis threshold;
  • the level of AI competence achieved does not have to be measured, and no certificates have to be obtained or awarded;
  • not only employees must be trained, but also external persons – a point that will undoubtedly appear more often in service contracts in the future, along with other AI clauses:

    "Persons dealing with the operation and use of AI systems on behalf of providers/deployers" means that these are not employees, but persons broadly under the organizational remit. It could be, for example, a contractor, a service provider, a client.

  • enforcement is the responsibility of the national market surveillance authorities. Sanctions are also left to the Member States. The market surveillance authorities will start enforcement as of August 2, 2026;
  • Art. 4 AI Act may (of course) be applicable extraterritorially.

Further documents can also be found in the "living repository to foster learning and exchange on AI literacy": the AI Office has compiled examples of ongoing AI literacy measures from the participants in the AI Pact and published them here. For example, the Italian Fastweb, a Swisscom subsidiary, says it has taken the following measures (with further references):

  • 1) AI Governance Model: implementing an AI Organizational Model and an AI Code of Conduct defining and documenting roles, responsibilities, principles, processes, rules and prohibitions for the adoption, usage, supply and purchase of AI.
  • 2) Accountability: defining roles and responsibilities and formally appointing AI-SPOCs (Single Points of Contact), trustworthy trained advisors to spread AI literacy within the company.
  • 3) Comprehensive AI Risk Assessment Framework: implementing a process and tool to qualify the risk score for every AI project, addressing risks through appropriate mitigation measures according to the AI Act, data protection, copyright, sustainability regulations, etc.
  • 4) Role-based AI training and awareness: providing general and specific training on AI Governance and Risk Assessment for all employees, including top management and AI-SPOCs.
  • 5) Multi-channel approach: offering training sessions in person, online, live and offline, maintaining a Learning Hub with 300+ free courses on AI, and sharing news on AI risks, rules and obligations.
  • 6) Information to affected persons: providing clear instructions, information, and warnings for AI system usage.
  • 7) Documentation: maintaining technical documentation, policies and templates.