The AI Act provides in Art. 3(56) and Art. 4 for an obligation referred to as “AI literacy” (“AI competence”), which applies to providers and deployers, including their employees and auxiliary persons, and not only to high-risk systems but to all AI systems.
Art. 3(56) AIA defines what this is about:
(56) ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause
How to get there is determined by Art. 4 AIA:
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
Because this duty is so general, can be tackled in parallel with other governance activities, and has been in force since February 2, 2025, it is one of the first duties of the AI Act to come into focus. At the same time, it is rather unspecific.
Against this background, the European Commission’s FAQ on Art. 4 AI Act of May 7, 2025 is welcome. The following points are worth noting:
- Art. 4 is not a recommendation, but a mandatory requirement;
- However, there is some leeway in terms of content. At a minimum, organizations should ensure that
- a general understanding of AI is achieved (what it is, how it works, what the organization itself uses, what the risks are, etc.),
- taking into account the characteristics of the organization,
- employees understand the relevant risks;
- As a rule, this calls for training courses or similar measures; the instructions for use of the AI systems alone are hardly sufficient;
- Even if only ChatGPT is used, sufficient AI literacy is required. The obligation is thus to be concretized on a risk basis, but without a de minimis threshold;
- the level of AI competence achieved does not have to be measured, and no certificates have to be obtained or awarded;
- training must cover not only employees, but also external persons – a point that will undoubtedly be found more often in service contracts in the future, along with other AI clauses:
“Persons dealing with the operation and use of AI systems on behalf of providers/deployers” means that these are not employees, but persons broadly under the organizational remit. It could be, for example, a contractor, a service provider, a client.
- enforcement is the responsibility of the national market surveillance authorities; sanctions are also left to the Member States. The market surveillance authorities will begin enforcement as of August 2, 2026;
- Art. 4 AI Act may (of course) be applicable extraterritorially.
Further documents can be found in the “living repository to promote learning and exchange on AI literacy”: the AI Office has compiled examples of ongoing AI literacy measures from participants in the AI Pact and published them there. For example, the Italian Fastweb, a Swisscom subsidiary, says it has taken the following measures (with further references):
- 1) AI Governance Model: implementing an AI Organizational Model and an AI Code of Conduct defining and documenting roles, responsibilities, principles, processes, rules and prohibitions for the adoption, usage, supply and purchase of AI.
- 2) Accountability: defining roles and responsibilities and formally appointing AI-SPOCs (Single Points of Contact), trustworthy trained advisors to spread AI literacy within the company.
- 3) Comprehensive AI Risk Assessment Framework: implementing a process and tool to qualify the risk score for every AI project, addressing risks through appropriate mitigation measures according to the AI Act, data protection, copyright and sustainability regulations, etc.
- 4) Role-based AI training and awareness: providing general and specific training on AI governance and risk assessment for all employees, including top management and AI-SPOCs.
- 5) Multi-channel approach: offering training sessions in person, online, live and offline, maintaining a Learning Hub with 300+ free courses on AI, and sharing news on AI risk, rules and obligations.
- 6) Information to affected persons: providing clear instructions, information, and warnings for AI systems usage.
- 7) Documentation: maintaining technical documentation, policies and templates.