By David Vasella, Version 1.0, September 22, 2024
The author thanks Amina Chammah, Lena Götzinger, Hannes Meyle and Kento Reutimann (all Walder Wyss) for valuable advice, and David Rosenthal (Vischer) for fruitful discussions.
Overview
The applicability of the AI Act and the definition of the roles of provider and deployer – there are also other roles – can be illustrated as follows:
In addition to the legally defined terms (→ 8), these FAQs use the following abbreviations:
AI: Artificial intelligence
AIA: AI Act (AI Regulation). References to articles without further indication refer to the AIA
AIS: AI system
FOSS: Free and Open-Source Software (free and open-source licenses)
GPAI: General-Purpose AI (AI with a general purpose)
GPAIM: General-Purpose AI Model (AI model with a general purpose)
GPAIS: General-Purpose AI System (AI system with a general purpose)
HRAIS: High-Risk AIS (high-risk AI system)
QMS: Quality management system
RMS: Risk management system
The “Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence and amending […]” (the Artificial Intelligence Regulation, AI Regulation, AI Act or AIA) is the comprehensive regulatory framework with which the EU (or the EEA – the AIA is of EEA relevance) regulates the use of AI systems (AIS).
An English-language online version of the AIA with a non-binding assignment of the recitals can be found at datenrecht.ch, as well as a PDF version.
It is a regulation, so like the GDPR it is directly applicable. However, the competent authorities will be able to specify and amend some points (→ 51).
In terms of substance, the AIA first defines its material and geographical scope of application and lays down rules for the development and use of AIS, especially for “High-Risk AI Systems” (HRAIS) and for AIS with a general purpose (i.e. use-case-agnostic, broadly applicable AIS – so-called “General-Purpose AI”, GPAI; see → 39). Certain particularly undesirable practices (use cases) are also prohibited (→ 27).
The academic review of the AI Act is underway but still in its infancy. From the general Swiss literature, reference should be made (non-exhaustively) to the following articles:
Rosenthal, The EU AI Act – Regulation on Artificial Intelligence, Jusletter of August 5, 2024 (https://dtn.re/tLrdFm)
Arioli, Risk Management under the EU Regulation on Artificial Intelligence, Jusletter IT of July 4, 2024 (https://dtn.re/7iE4zb)
Houdrouge/Kruglak, Are Swiss data protection rules ready for AI?, Jusletter of November 27, 2023 (https://dtn.re/KvghSt)
Miller, The EU Artificial Intelligence Act: A risk-based approach to the regulation of artificial intelligence, EuZ 1/2022 (https://dtn.re/PafzEb)
Special literature exists in particular on copyright issues in connection with generative AI (e.g. Thouvenin/Picht, AI & IP: Recommendations for legislation, application of the law and research on the challenges at the interfaces of [AI and IP], sic! 2023, 507 ff.), on liability issues (e.g. Quadroni, Künstliche Intelligenz – praktische Haftungsfragen, HAVE 2021, 345 ff.) and on labor law topics (e.g. Wildhaber, Künstliche Intelligenz und Mitwirkung am Arbeitsplatz, ARV 2024, 1 ff.).
Further information can be found on an ongoing basis at www.datenrecht.ch and on the blog of Vischer (https://dtn.re/BAG7Il).
The following works from the foreign legal literature should be mentioned in particular:
Voigt/Hullen, Handbook AI Regulation FAQ on the EU AI Act, 2024 (Kindle E‑Book: https://dtn.re/bIwQg3)
Wendt/Wendt, The New Law of Artificial Intelligence, 2024 (Kindle E‑Book: https://dtn.re/kFmWjk)
From the non-legal or not primarily legal literature, reference should be made to:
Gasser/Mayer-Schönberger, Guardrails: Guiding Human Decisions in the Age of AI, 2024, a discussion of frameworks (laws, norms, and technologies) for decision-making, the challenges of digital decisions, and possible principles for guardrails (Kindle e‑book: https://dtn.re/nYx3pm)
Strümke, Artificial Intelligence (Kindle E‑Book: https://dtn.re/eOI7vU); a fairly comprehensive and readable introduction to the history of the area, technical issues, risks and weak points and speculations on further development.
The European Commission presented its proposal on April 21, 2021 (Proposal of the European Commission of April 21, 2021, https://dtn.re/JSQJtF), with stricter regulations on transparency and traceability being a particular concern. The regulation of AI models that are suitable for widespread use (“General-Purpose AI Model”, GPAIM; at the time often referred to as the “Foundation Model”) was already the subject of intense debate at the time.
In the subsequent trilogue negotiations – the informal negotiation procedure in which representatives of the Parliament, the Council and the Commission seek a compromise – the issue of GPAI remained a point of contention until the end; a compromise was reached on December 9, 2023. This course of events explains the separate and remarkably brief regulation of GPAI in Chapter V (→ 39 ff.).
On May 21, 2024, the Council approved the outcome of the negotiations. The AI Act was published in the Official Journal of the European Union on July 12, 2024 (OJ L, 2024/1689, https://dtn.re/0OYJXY).
The AIA entered into force on August 1, 2024, 20 days after its publication in the Official Journal. Its provisions take effect gradually (Art. 113):
February 2, 2025: Chapters I and II (general provisions and prohibited practices) take effect.
August 2, 2025: Certain requirements, including reporting obligations and sanctions, become effective. This concerns the provisions on notifying authorities and notified bodies (Chapter III Section 4), the requirements for GPAIM (Chapter V), governance in the EU (Chapter VII) and sanctions (Chapter XII), as well as the provisions on the authorities’ duty of confidentiality (Art. 78).
August 2, 2026: Most of the provisions take effect, especially those for HRAIS, with the following exception.
August 2, 2027: The provisions for HRAIS also apply within the scope of Art. 6 (1), i.e. for AIS that are installed or used as a safety component of a product in accordance with Annex I.
Yes, a few, according to Art. 111:
In principle, the AIA does not apply to operators of HRAIS that were placed on the market or put into service before August 2, 2026, unless those systems are subsequently subject to significant changes in their design; for HRAIS intended to be used by public authorities, however, compliance is required by August 2, 2030 at the latest.
Providers of GPAIM that were placed on the market before August 2, 2025 must comply with the AIA only by August 2, 2027.
Art. 111 further provides that AIS used as components of the large-scale IT systems in the public sector listed in Annex X only have to be compliant by the end of 2030. This concerns the Schengen and Visa Information Systems and similar systems.
The term “artificial intelligence” (“AI”) refers to the behavior of a computer that is not and cannot be intelligent, but that looks like intelligence from the outside. A definition from the European Parliament goes in the same direction: “Artificial intelligence is the ability of a machine to imitate human abilities such as logic and creativity”. The well-known Turing test, for example, is passed when a person can no longer recognize whether their interlocutor is human or machine.
The distinction between artificial intelligence and deterministic systems is therefore not qualitative but ultimately quantitative. Artificial intelligence is whatever looks like it, because a machine arrives at a result that was not determined by a human being – or appears not to have been: complex systems are also deterministic; they only appear intelligent because their result is surprising, which is due to the fact that a machine decision, given particular complexity and lack of access to the training data, is not factually comprehensible in all respects. This also makes it difficult to interpret the concept of the AI model under the AIA (→ 13).
The AIA defines a total of 68 terms in Art. 3. They are subsequently used in the AIA without any explicit reference to the definition in the relevant article – the reader therefore often has to return to Art. 3, especially as terms are also legally defined for which one would not necessarily expect this (e.g. “risk” or “widespread infringement”).
To make matters worse, some terms are used in German (“Inverkehrbringen”) and others in English (“Provider”, “Deployer”). A comparison of the German and English equivalents can therefore be found in the appendix to this FAQ.
Relationships between data are mapped using statistical models. This does not mean that statistics in itself is a form of AI. Statistical methods are mathematical models that are used in both AI and deterministic approaches. However, machine learning (→ 10) and other approaches generally work with statistical methods.
One important such method is regression analysis. It determines the factors (variables) that are decisive for a result (or the strength of the influence of a variable on a result), which allows a corresponding forecast to be made. If the x‑axis of a diagram shows the rainfall and the y‑axis the number of visitors to an exhibition, the points on the diagram indicate the number of visitors depending on the rainfall. A line that mathematically best fits all the points (the “regression line”) explains the relationship between the axes or variables, in this case how the rain affects the number of visitors. It can also indicate how far the data points deviate from the line, i.e. its error or fluctuation range, the degree of reliability of the regression line (usually expressed as “R²”; an R² value of 0.73 means that 73% of the variance in the data is explained by the regression line).
A linear regression is based on the hypothesis that the target value (the number of visitors) depends linearly on a variable (the rainfall), or that the market value of a property reacts uniformly to a change in land area and location. Here, a straight line is drawn through the data points, and further values (the number of visitors, the property value) can be determined on this basis. This makes simple prognostic models possible. In a non-linear regression, a curved line is fitted instead, for example because a non-linear relationship is to be represented (e.g. if the number of visitors only falls when it rains heavily and not when it drizzles, or if sales figures fall more sharply once prices rise above a certain threshold). Here, too, the logic is deterministic.
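To make the regression example concrete, here is a minimal Python sketch using scikit-learn; the rainfall and visitor figures are invented for illustration:

```python
# Minimal sketch of the rainfall/visitor regression example.
# The data points are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rainfall_mm = np.array([[0], [5], [10], [20], [40]])  # explanatory variable (x)
visitors = np.array([900, 820, 700, 520, 210])        # target variable (y)

model = LinearRegression().fit(rainfall_mm, visitors)

# The regression line: visitors ≈ intercept + slope * rainfall
print("slope:", model.coef_[0], "intercept:", model.intercept_)

# R² indicates how much of the variance the line explains
print("R²:", model.score(rainfall_mm, visitors))

# Forecast for a day with 15 mm of rain
print("forecast:", model.predict([[15]]))
```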
Other statistical methods include cluster analyses. Here it is not about a specific linear or non-linear relationship between variables, but about quantifying relationships between data using distance or similarity measures and grouping objects with a low distance measure together. In two- or multi-dimensional data (“data clouds”), clusters have a common center of gravity, and cluster analyses are used to find these centers and assign data to the cluster whose center is closest. This can be used, for example, to assign potential borrowers to a cluster when granting loans and to set loan conditions on this basis.
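The borrower example can be sketched in the same way; k-means is one common clustering algorithm that finds the cluster centers described above (the features and figures are invented):

```python
# Minimal k-means sketch for the borrower example (invented data):
# each row is a borrower described by (income in kCHF, existing debt in kCHF).
import numpy as np
from sklearn.cluster import KMeans

borrowers = np.array([
    [120, 10], [115, 15], [60, 40], [55, 45], [30, 80], [28, 75],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(borrowers)
print("cluster centers:", kmeans.cluster_centers_)

# A new applicant is assigned to the cluster with the nearest center;
# loan conditions could then be tied to the cluster.
print("cluster of new applicant:", kmeans.predict([[58, 42]]))
```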
A distinction can also be drawn between parametric and non-parametric models. In non-parametric regression, the relationship between the variables is not predetermined but is first derived from existing data according to various criteria, e.g. for modeling economic data, investigating pollutant concentrations or forecasting share prices. Parametric statistics, on the other hand, presuppose that the data used correspond to a specific statistical distribution characterized by a fixed number of parameters.
From a technical perspective, AI is the branch of computer science that deals with the development of corresponding systems. The most important technology in this field is machine learning (“ML”). It is not a synonym for AI, because ML essentially serves to recognize patterns and derive predictions from them, while AI attempts to solve a task.
ML is intended to enable a computer to “learn” on the basis of data, i.e. to derive knowledge from data. However, “knowledge” is the wrong term. The old distinction between deduction and induction is important here. In deductive inference, a rule given as true is applied, and the results derived from the rule can be considered as true as the rule itself (rule: all fish can swim; input: Wanda is a fish; result: Wanda can swim). There is also abduction: from a rule and an observation, the most plausible explanation is inferred (rule: flu causes headaches; observation: Wanda has a headache; explanation: Wanda has the flu). As a deduction this would be inadmissible, because headaches can have many causes; abduction therefore works through several possible causal chains to find the most probable cause. Such systems are common; a well-known example is the “CADUCEUS” diagnostic system. In inductive inference, on the other hand, an assumed rule is inferred from information. Machine learning often proceeds inductively: statistically based statements are generated from data, which are more or less convincing hypotheses but cannot claim to be true or objective. However, the boundaries are fluid, because these approaches can also be combined.
ML therefore enables a machine to observe data and use it to generate predictions or hypotheses that are more or less probable, i.e. that fit the input data more or less well. The explanation of the hypothesis formed in this way – e.g. the leap from correlation to causality – lies outside ML; it is a form of heuristics, not ML. This is why ML often relies on large amounts of data: the patterns sought, the relationships between data, only become observable in the mass.
Recognizing patterns means generalizing. The better a model – models are mathematical functions – can generalize, the more powerful it is. As mentioned, training serves this purpose. If the training uses too little data, the model cannot draw reliable conclusions – this is known as underfitting. Conversely, a model can learn the input data too well, in extreme cases by heart. It then fits the input data but is unable to generalize, like a person who has a good memory but does not think – this is known as overfitting. When training an ML model, validation and test data sets are therefore used in addition to the training data in order to reduce both overfitting and underfitting and to improve, or at least estimate, the validity – the reliability of the generalizing hypothesis.
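A minimal sketch of this idea, assuming scikit-learn and synthetic data: a model that is too simple scores poorly everywhere (underfitting), while an overly flexible model scores well on the training data but poorly on the held-out validation data (overfitting):

```python
# Sketch: under- vs. overfitting, estimated on held-out validation data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 60).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 60)  # noisy signal

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (1, 4, 15):  # too simple / adequate / memorizing the noise
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree, "train:", round(model.score(X_train, y_train), 2),
          "validation:", round(model.score(X_val, y_val), 2))
# A high training score with a low validation score indicates overfitting;
# low scores on both indicate underfitting.
```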
ML uses statistical models (→ 9), e.g. linear regression primarily for supervised learning or cluster analyses for unsupervised learning, and it can use both parametric and non-parametric methods. Parametric models in ML use a fixed model structure, but the values of the parameters are optimized through training. An example is linear regression, when a model learns to predict real estate prices because it learns to recognize the statistical relationships between certain parameters and prices more reliably during training. These models therefore require certain assumptions about the data, but do not rule out “learning” through training. In contrast, non-parametric models in ML do not have a fixed structure or a fixed number of parameters. Examples are decision trees that are improved in the course of training. They, too, are statistical models, determining the best possible “split” at each node on the basis of statistical criteria.
A better criterion for distinguishing ML from deterministic approaches is therefore the basic procedure: deductive methods apply given basic assumptions and draw conclusions from them, while inductive methods generate possible rules through a training process with increasing reliability. Rule generation is therefore a key factor in differentiating between deterministic and non-deterministic approaches. One example is decision trees that are not predefined but generated by training – in training, the model determines those rules that best separate or explain the training data, i.e. that have the greatest informative value for a target variable (e.g. creditworthiness). These rules can be interpreted and reused. Another example is association analyses, which map relationships in large amounts of data and generate rules describing frequent correlations. In shopping basket analysis, for example, a rule such as “whoever buys diapers on Friday evening also buys beer” can be generated. These rules, too, are explicit and can be interpreted.
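As an illustration of such training-generated, explicit rules, here is a minimal sketch with an invented creditworthiness data set; the learned if/then rules can be printed and reused:

```python
# Sketch: a decision tree generates explicit, interpretable rules from
# training data (invented creditworthiness example).
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [income in kCHF, number of missed payments]
X = [[30, 4], [40, 3], [55, 0], [80, 1], [95, 0], [25, 5]]
y = [0, 0, 1, 1, 1, 0]  # 0 = default, 1 = creditworthy

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned if/then rules can be read, interpreted and reused:
print(export_text(tree, feature_names=["income", "missed_payments"]))
```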
Expert systems are systems that contain a knowledge base, e.g. application-specific if/then rules. The system applies rules to this knowledge base in order to derive further facts or conclusions (inference). It can state probabilities and may also work with imprecise information (“fuzzy logic”). One example is the well-known “Mycin”, a system developed at Stanford University in the 1970s to support the use of antibiotics. Based on parameters such as pathogen type, disease progression and laboratory data, the system was able to use rules to make or prepare decisions based on probabilities and uncertainties.
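A toy sketch of the inference step described above – the rules and medical facts are invented and merely illustrate the forward-chaining principle, not Mycin itself:

```python
# Toy sketch of an expert system: rules are applied to a knowledge base
# until no new facts can be derived (forward chaining). The medical rules
# are invented and only illustrate the approach.
rules = [
    ({"fever", "cough"}, "infection_suspected"),
    ({"infection_suspected", "positive_lab_test"}, "bacterial_infection"),
    ({"bacterial_infection"}, "consider_antibiotics"),
]

facts = {"fever", "cough", "positive_lab_test"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # inference: a new fact is derived
            changed = True

print(facts)  # now includes 'consider_antibiotics'
```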
While decision trees generate explicit rules, neural networks are an example of implicit rule generation. Neural networks learn complex patterns from the data, but the “rules” they apply to make predictions are hidden in the weights and activations of the neurons. Although there are no explicit “if-then” rules in neural networks, the decisions are still determined by rules learned during the training process.
The difficulty with neural networks is that the rules are often difficult to understand – they are “black box” models. Recently, however, there has been progress in explainable AI, which aims to reveal these implicit rules and make them easier to understand.
This does not yet say how ML proceeds. According to the methodology of learning, four forms can be distinguished:
Supervised learning, in which labeled data sets are used.
Unsupervised learning, in which patterns are recognized without labels, for example in data mining.
Semi-supervised learning, as an intermediate form in which both labeled and unlabeled data are used.
Reinforcement learning, in which learning is reinforced through interaction with the environment.
It is also helpful to differentiate between symbolic and subsymbolic learning. Symbolic learning is so called because it uses symbols and logical rules to represent knowledge. One example is decision trees: here, a structure of conditions or rules analogous to a flowchart is used or generated to draw rule-based conclusions from training data. The structure is tree-like because nodes represent decisions – each node corresponds to an if/then rule based on a property of the input data. The branches represent the results of applying these rules, and the leaves are the endpoints with the result, classification or prediction. Decision trees therefore work through defined decision processes. Several decision trees can be trained on different data and then together – for example by majority decision – deliver better results than a single tree, which tends to overfit.
However, symbolic learning can reach its limits with large amounts of data. Subsymbolic learning, on the other hand, uses raw data that does not need to be converted into system-compatible symbols. This approach is better suited to recognizing complex patterns in input data, but may be less transparent because the complex processes are more difficult to understand. Which form of ML is used does not depend on the area of application, but rather on whether rules are already known or have yet to be created. In the credit rating example, a company need not only work with a decision tree; it can also try to determine correlations between losses and other factors such as age, place of residence, gender, purchasing behavior, household size, etc. using a form of subsymbolic ML. The observed correlations can then be used as rules for a decision tree.
Subsymbolic learning includes, for example, artificial neural networks (and deep learning as a buzzword for particularly complex networks → 11).
Neural networks are algorithms that emulate information processing in the brain in order to recognize patterns in input data. A large number of connected “nodes” are used, which together form “layers” and process (“weight”) the input data step by step, possibly over several or very many layers. In contrast to decision trees (→ 10), neural networks are connected in a more complex way because each node can be connected to several other nodes in the next layer.
For the network to be capable of meaningful processing, these weightings must be set correctly. Decision-making in neural networks thus takes place as distributed and continuous processing from a receiving input layer via interposed hidden layers to the output layer, whereby the network learns by adjusting the weights between the nodes. Decision trees, on the other hand, work with explicit conditions (“if A > X go left, otherwise go right”). Each node makes a decision that leads to a specific branch, which is why the decision paths are always completely comprehensible. Deductive systems are therefore more of a “white box”, inductive systems more of a “black box”.
The weights in the network are improved through training by comparing the output of the network with an expected result. If there are deviations, the weights are adjusted using further training data – and so on (see the sketch below).
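A minimal sketch of this loop, using a single logistic “neuron” as a stand-in for a full network and invented data:

```python
# Minimal sketch of the described training loop: compare the output with
# the expected result and adjust the weights accordingly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # expected results ("labels")

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(200):                         # repeated training passes
    output = 1 / (1 + np.exp(-(X @ w + b)))  # current prediction
    error = output - y                       # deviation from expectation
    w -= lr * X.T @ error / len(X)           # adjust the weights ...
    b -= lr * error.mean()                   # ... and the bias, then repeat

print("learned weights:", w, "bias:", b)
```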
This training can be carried out in different ways:
In supervised learning, both input data and the desired outputs are made available to the network. The network thus learns to map the relationship between input data and output. An example is an input of animal photos where the network is simultaneously provided with appropriately labeled images of dogs and cats (“labels”). The network compares its predictions from the input data with the labels and adjusts the weightings until prediction errors are minimized. This is a common procedure for data classification, e.g. for image classification, spam filters (learning from e‑mails marked as spam) or the prediction of real estate prices (= the labeled data) based on information about the size, location and features of the property.
In unsupervised learning, the network receives input data but no labels. It must therefore independently recognize patterns and structures in the data by grouping similar data points or reducing data to certain relevant characteristics. This approach is suitable for data exploration, e.g. for customer segmentation (grouping based on purchasing behavior without predefined categories), the recognition of unusual transactions without a definition of “unusual” or the recognition of topic clusters in a large text collection.
Semi-supervised learning combines supervised and unsupervised learning – both (a few) labeled and (a lot of) unlabeled data are used for training. The labels make it easier to recognize patterns. When labeling data is too time-consuming, this approach can be useful, for example, when a smaller number of labeled X‑ray images are used with a larger amount of unclassified images to improve diagnostic accuracy, when classified product reviews are used with unclassified reviews to determine sentiment in new reviews (“sentiment analysis”), or in speech recognition when transcribed audio recordings are combined with speech data to improve recognition accuracy.
In reinforcement learning, the network interacts with an environment and “learns” – adjusts weights – through rewards and punishments. It is an interactive trial-and-error approach that is used, for example, to train an agent in games such as chess or Go (learning through repeated play), in robot navigation (learning through navigation in an environment) or in energy management (learning by adapting power distribution based on consumption patterns).
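A toy sketch of this trial-and-error approach (tabular Q-learning in an invented five-state corridor world; the agent learns that moving right leads to the reward):

```python
# Sketch of reinforcement learning: an agent learns by trial and error
# which actions in a tiny corridor world lead to a reward at the end.
import numpy as np

n_states, n_actions = 5, 2           # states 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # learned action values
rng = np.random.default_rng(0)

for _ in range(500):                 # episodes of trial and error
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s_next == n_states - 1 else 0.0  # the "reward"
        # adjust the estimate toward reward + discounted future value
        Q[s, a] += 0.1 * (reward + 0.9 * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # moving right emerges as the better action in every state
```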
Like other ML models, neural networks also form rules. However, unlike in an association analysis, for example (→ 9), these rules are not explicit – nor are they meant to be: the aim is not to find rules but to produce an output that applies rules without disclosing them (“black box”). A decision tree forms explicit rules, while the implicit rules of neural networks are hidden in the activations and weightings of the “neurons”. The problem with neural networks is that these rules are often difficult to understand.
However, there are approaches to reveal implicit rules. For example, “saliency maps” visualize which components of the input contributed most to the decision (e.g. by highlighting the image area that was decisive for the classification), and “Local Interpretable Model-agnostic Explanations” (LIME) work in a similar way – they fit simple models such as linear regressions alongside the neural network and can provide comprehensible explanations (e.g. that words such as “free” are decisive for the classification of an email as spam).
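The LIME idea can be sketched as follows – the “classifier” is an invented stand-in, and real LIME implementations are more elaborate:

```python
# Sketch of the LIME idea for the spam example: perturb the input by
# dropping words, observe the (hypothetical) classifier, and fit a simple
# linear model whose weights explain the decision locally.
import numpy as np
from sklearn.linear_model import LinearRegression

words = ["free", "offer", "meeting", "tomorrow"]

def spam_score(present):             # stand-in for a black-box classifier
    return 0.7 * present[0] + 0.25 * present[1] + 0.05 * present[3]

rng = np.random.default_rng(0)
samples = rng.integers(0, 2, size=(200, len(words)))  # words kept/dropped
scores = np.array([spam_score(s) for s in samples])

surrogate = LinearRegression().fit(samples, scores)
for word, weight in zip(words, surrogate.coef_):
    print(word, round(weight, 2))    # "free" dominates the classification
```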
A Large Language Model (LLM) is based on a neural network (→ 11) and “understands” language. Well-known examples are the GPT models from OpenAI, Gemini from Google, LLaMA from Meta, Claude from Anthropic, Command from Cohere, Grok from X, the models from Mistral, Ernie from Baidu or Falcon from the Technology Innovation Institute in Abu Dhabi.
In the training of an LLM, a distinction can be made between preprocessing and the actual training.
As part of preprocessing, the training data (e.g. texts from books, websites, forums, Wikipedia, etc., now also on the basis of corresponding licenses from major publishers such as the NY Times; for training → 36) are cleaned up: for example, irrelevant or incorrect content or spam is removed, and in some cases superfluous symbols and stop words (such as “the” or “and”).
A “tokenizer” then breaks texts down into smaller units (the tokens) – whole words, single characters or word components. The latter applies to OpenAI, for example, where a variant of “byte-pair encoding” is used: starting from single characters, the most frequent character pairs are combined into new tokens, as a result of which the vocabulary grows successively and more frequent words or word components are represented as a whole. Homonyms such as “bank” can be stored as several tokens depending on the context (“money in the bank”, “sitting on the bank”).
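A minimal sketch of the merging step of byte-pair encoding (real tokenizers work on much larger corpora and with additional rules):

```python
# Minimal sketch of byte-pair encoding: starting from single characters,
# the most frequent adjacent pair is repeatedly merged into a new token.
from collections import Counter

tokens = list("low lower lowest")    # start with single characters

for _ in range(5):                   # a few merge steps
    pairs = Counter(zip(tokens, tokens[1:]))
    (a, b), _count = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(a + b)     # the frequent pair becomes one token
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    tokens = merged

print(tokens)  # frequent components such as 'low' end up as single tokens
```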
However, the tokens have no significance on their own – they are only interesting in their relationship to other tokens. These relationships emerge from the input data during training and can be conceptually expressed as proximity or distance values. For example, the word “house” is closer to the word “roof” than to the word “damage”, the token “big” is closer to “kind”, etc. Corresponding values are therefore assigned to each token. These values are the vectors: in general, a vector is an ordered list of numbers with a certain dimensionality in a certain order. In the context of an LLM, a vector is the value of a token in relation to other tokens. The learned vectors are called “embeddings” – embeddings are therefore an expression of the structure or properties of data.
“Dimensionality” means the number of numerical values of the vector. These numbers express the properties of a token. A vector with a dimensionality of 768 therefore means a series of 768 numbers, each representing a specific learned feature. The higher the dimensionality, the finer the recorded differences in meaning. The GPT‑3 model from OpenAI has a dimensionality of 768 to 12,288, depending on the variant. The value for GPT‑4 is not known, but is presumably similar. Each token therefore receives up to 12,288 properties during training.
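Conceptually, proximity between embeddings is often measured as cosine similarity; a sketch with invented three-dimensional vectors (real models use hundreds to thousands of dimensions):

```python
# Sketch: token vectors ("embeddings") express proximity in meaning.
# The three-dimensional vectors are invented for illustration.
import numpy as np

embeddings = {
    "house": np.array([0.9, 0.1, 0.3]),
    "roof": np.array([0.8, 0.2, 0.4]),
    "damage": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):                    # 1.0 = same direction, 0.0 = unrelated
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("house/roof:  ", cosine(embeddings["house"], embeddings["roof"]))
print("house/damage:", cosine(embeddings["house"], embeddings["damage"]))
```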
Trained models can then be further trained for specific areas of application on a specific, smaller data set (“fine-tuning”), e.g. with medical data, technical documentation, legal texts or material from a specific company. The model is trained on this data in such a way that it refines the skills it has learned without unlearning them. The parameters of the model are slightly adjusted – for example, the model learns technical terms, certain formulations or typical sentence structures. One example is the EDÖBot from datenrecht (https://edoebot.datenrecht.ch/), which is based on a model from OpenAI but has been further trained with data protection material.
Performance can also be improved by “Retrieval-Augmented Generation” (“RAG”). Here, an LLM is combined with external sources of information, i.e. information outside the model is included in the query, e.g. more up-to-date or more specific information that was not learned in training. A search component (the “retriever”) searches an external database for relevant data during the query, and the generator uses this data to provide a better response. This is also used by the EDÖBot, which can, for example, access the dispatch on the current DPA or the FDPIC’s guidelines.
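A minimal sketch of the RAG pattern; the bag-of-words “embedding” is a toy stand-in for a real embedding model, and the final generation step by the LLM is only indicated:

```python
# Sketch of the RAG pattern: a retriever ranks documents by similarity to
# the question; the most relevant ones are then included in the LLM prompt.
import numpy as np

documents = [
    "The revised DPA entered into force on September 1, 2023.",
    "The FDPIC has published guidelines on data processing.",
]

vocab = sorted({w for d in documents for w in d.lower().split()})

def embed(text):  # toy bag-of-words embedding; real RAG uses an LLM embedding model
    words = text.lower().split()
    return np.array([float(w in words) for w in vocab])

doc_vectors = [embed(d) for d in documents]

def retrieve(question, k=1):
    q = embed(question)
    scores = [q @ v for v in doc_vectors]
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "When did the revised DPA enter into force?"
context = retrieve(question)
# Generator step: the retrieved context would be prepended to the LLM
# prompt; here we only print what the retriever found.
print(context)
```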
During the negotiations (→ 4), this central point – which determines the material applicability of the AIA – was one of the particularly contentious issues, and it cannot be said that the outcome was a success. The definition is now closely aligned with that of the OECD; the Commission’s draft of April 2021 (https://dtn.re/dzZqxl) had still contained a different definition.
The AIA now defines an AIS as follows (Art. 3 No. 1 and Recital 12):
“AI system” means a machine-based system that is designed to operate with varying degrees of autonomy and that, once operational, can be adaptive and that derives from inputs received for explicit or implicit goals how to produce outputs such as predictions, content, recommendations or decisions that can affect physical or virtual environments;
So it’s about
“a machine-based system” (i.e. not a biological system, for example – transplanting a brain would therefore not be placing an AIS on the market),
designed for varying degrees of autonomous operation and
which can be adaptable once it is operational and
that derives from the inputs received for explicit or implicit goals how to create outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”.
As a result, two elements are decisive, but they ultimately merge into one:
The system is designed for a certain degree of autonomous operation. According to Recital 12, this means that it is not “based exclusively on rules defined by natural persons for the automatic execution of operations”, but that it “acts to a certain extent independently of human intervention and is capable of operating without human intervention”; and
it can derive an output from input, whereby this is not just any derivation, but “learning, reasoning and modeling processes” (Recital 12; “Inference”).
However, this leaves open what is meant by the necessary autonomy in operation.
The basis for an AIS is primarily machine learning (ML → 10) (Recital 12: “machine learning, whereby data is used to learn how certain goals can be achieved”). It would be obvious to use the aforementioned distinction between deductive and inductive models (→ 10) and to understand an AIS as ML which, in contrast to deterministic statistical models, does not proceed deductively, i.e. which does not apply predefined rules, or not only predefined rules, but defines rules or at least learns to weight predefined parameters. An AIS would therefore be, for example, a model that learns from training data how strongly the land area as a given parameter affects real estate prices, and no AIS would be a model that applies defined parameters and weightings to new data – e.g. a simple Excel sheet with a corresponding formula.
However, the distinction is not that clear. According to Recital 12, the AIA also covers “logic- and knowledge-based concepts derived from coded information or symbolic representations of the task to be solved” as AIS. This applies to the example just mentioned: the Excel sheet for calculating real estate prices is a logic- and knowledge-based concept that draws deductions from the coded task – the Excel formula – and calculates real estate prices depending on the input data. Whether this concept, i.e. the Excel formula, is based on training is in itself irrelevant, because the Excel sheet does not learn during use. If only the distinction between deductive and inductive approaches were applied, all these systems would fall outside the definition.
In any case, what matters cannot be the adaptability of the model during operation, i.e. after commissioning, for two reasons: First, the element of adaptability is not mandatory under the wording of the provision, but illustrative. Secondly, trained models would otherwise not be covered by the AIA, and this applies to the vast majority of systems used, including widely used LLMs – which is of course not the intention. However, a trained system is not really autonomous in operation: it processes the input data according to its parameters, which may have been learned in a training phase but, as mentioned, no longer change (until an update, and subject to the exceptional case that a system continues to be trained in operation, as may be the case, for example, with anti-fraud systems). From this perspective, most systems are deterministic, not autonomous.
Nor can one look to the development phase. During development, a model can learn, and because the goal of the learning process is only described functionally (e.g. reliable classification of images, generation of a meaningful text) but not technically, the learning process is not determined on a technical level (how the parameters are to be set so that the learning goal is achieved is not specified – hence the training). However, the wording of Art. 3 No. 1 explicitly refers to “operation” and not to training – training is addressed in the AI Act (→ 36), but not in the definition of the AI system, and unlike testing, it is not mandatory. The required autonomy therefore cannot be sought in training alone.
This still leaves open what is meant. The OECD published an accompanying memorandum for its parallel definition of the “AI system” in March 2024, which is somewhat clearer on the required autonomy:
AI system autonomy (contained in both the original and the revised definition of an AI system) means the degree to which a system can learn or act without human involvement following the delegation of autonomy and process automation by humans. Human supervision can occur at any stage of the AI system lifecycle, such as during AI system design, data collection and processing, development, verification, validation, deployment, or operation and monitoring. Some AI systems can generate outputs without these outputs being explicitly described in the AI system’s objective and without specific instructions from a human.
Autonomy in operation therefore does not refer to the function of the system as such – which, as noted, is usually deterministic – but to what it does with input data: a system is autonomous if it can work according to the input without human intervention and generates an output that is not explicitly predetermined. The non-deterministic aspect is therefore to be found in the data processing and refers to the relationship between input and output.
It can be argued that there is no real autonomy here either. If the system is trained, its data processing is determined by the parameters of the system. The same input must generate the same output unless a random function is built in. This is often the case, e.g. in the OpenAI models (the factor can be controlled to a certain extent with the temperature setting), but even a random generator is basically deterministic (non-deterministic generators provide different values under the same initial conditions, but because the software is deterministic in itself, an external factor such as radioactive decay must be included for randomization, and this factor obeys natural laws).
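How the temperature setting flattens or sharpens the sampling distribution can be sketched in a few lines (the model scores are invented):

```python
# Sketch: how the temperature setting controls randomness when sampling
# the next token. Low temperature ≈ near-deterministic, high ≈ more varied.
import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # invented model scores for three tokens

def sample_distribution(logits, temperature):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, numerically stable
    return probs / probs.sum()

for t in (0.2, 1.0, 2.0):
    print(t, sample_distribution(logits, t).round(2))
# At t = 0.2 almost all probability sits on the top token; at t = 2.0 the
# distribution flattens and the output varies more between runs.
```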
However, the AIA is a law with a purpose, not a natural-philosophical treatise. Accordingly, it must be interpreted functionally, in particular with regard to the legal consequences that are meant to cover the circumstances for which it is designed. As an interim result, the necessary autonomy must therefore be located at the level of data processing from input to output, and it must be such that the result does not appear determined to an ordinary human observer.
Ultimately, this is a form of the Turing test (→ 6): The AIA records a system as an AI system if it looks like AI. The rule of thumb of the Austrian data protection authority also goes in this direction (→ 1):
Put simply, these are computer systems that can perform tasks that normally require human intelligence. This means that these systems can solve problems, learn, make decisions and interact with their environment in a similar way to humans.
An AIS is therefore a system which, in operation, generates an output from a large number of a priori given options, without the selection being purely random and without following direct human guidance, and which thereby fulfills a task for which a human would have to think. This also explains the distinction from a deterministic system: a person who is told in detail how to proceed no longer has to think. Accordingly, it will not be possible to say with certainty for every system whether it falls under the AIA or not.
AIS are for example:
Chatbots
Recommendation systems for streaming services
Voice assistants that learn through user interaction
autonomous vehicles that adapt their driving style based on sensor and environmental data
Facial recognition systems whose accuracy is improved through use
ML-based translation tools
Fraud detection systems at banks that learn to recognize suspicious patterns
Diagnostic systems in the healthcare sector
personalized learning platforms (even if they generate repetition intervals based on learning success)
Spam filters
A non-deterministic approach remains a prerequisite in each case. Pure if/then logic is not sufficient – e.g. when a music streaming service suggests Megadeth to all Metallica listeners: such logic is deterministic; a learning or inference component is missing (assuming the system has not found this correlation itself).
The following, for example, are not AIS:
Excel calculations, but with the proviso that an Excel document could also be programmed into an AIS
Databases such as MySQL that provide information on request
Image processing software, insofar as it is deterministic, i.e. does not generate images and is not based on an LLM
Mail clients that move emails to folders according to fixed rules
the browser with which ChatGPT is used
Spam filters based solely on white/black lists
deterministic software generated by or with the help of AIS (this is likely to affect a large proportion of software today if development is AI-supported, e.g. when using Github Copilot)
Various other application examples can be found in the Algorithm Watch atlas (https://dtn.re/ggJqKy).
An AI model as such is also not an AIS, i.e. it is a basic technology that has not yet been applied to any area of application (→ 39).
An AIS can be a product itself (e.g. an AIS to assess the suitability of job applicants), or it can be part of another product as an “embedded system” or “embedded AI” (e.g. a control system). In the case of control systems, the corresponding product does not therefore become an AIS as a whole, as follows from Art. 6 para. 1 and Art. 25 para. 3 – it remains subject to the corresponding product regulations, but the embedded AIS becomes an HRAIS through installation, provided the product falls under Annex I (→ 28). Only when the product manufacturer makes the AIS component available on the market or puts it into operation in its own name does it become the provider of the HRAIS (Art. 25 para. 3). In the conformity assessment, the control system will nevertheless have to be assessed in the context of the overall system. For other AIS, on the other hand, a separation is only possible if the AI component can be clearly distinguished from other components (e.g. in a recruiting system that clearly separates an AI module for applicant ranking from the management of the applications under consideration).
The qualification as an AIS as such does not say anything about the associated risk – not least because the risks do not arise from the technology but from the conditions of its use. The AI Act divides AIS use cases into four categories, even if it does not say so expressly (→ 16). The AIA also recognizes GPAI, the regulation of which was a pièce de résistance during the negotiations (→ 39 ff.).
No. First of all, the EU can only regulate within its mandate, i.e. only activities within the scope of Union law. This excludes activities of Member States that affect national security. Certain AI systems are then excluded from the AIA (Art. 2):
AIS that serve exclusively military purposes or national security (Art. 2 para. 3), whereby the AIA incorporates the limits of EU law;
AIS that are developed and used exclusively for research purposes (Art. 2 para. 6), so that the freedom of research is not impaired (AIS whose possible uses are not limited to research are, however, covered by the AIA; Recital 23);
AIS that private individuals use for non-commercial purposes (Art. 2 para. 10; e.g. the private use of ChatGPT for planning a wedding reception);
FOSS (Art. 2 para. 12), i.e. free and open-source software (or models), provided that open distribution is permitted and users may use, modify and redistribute the model free of charge, and with the proviso that FOSS remains covered if it is an HRAIS (→ 28), if it or its use constitutes a prohibited practice (→ 27), or if it interacts directly with users or is used for the generation of content (Art. 50 → 37);
AIS during the research, testing and development phase before they are placed on the market or put into service, except for tests under real conditions (Art. 2 para. 8). However, providers of AIS must of course also comply with the requirements during these phases, or rather prepare for compliance.
Despite its name, the AIA is neither a comprehensive regulation of artificial intelligence nor market conduct law, but product safety law. It is based on the established principles of product regulation in the European single market, particularly in the “New Approach” regulations.
The “New Approach” (see the Commission Communication COM(2003)0240 of 2003, https://dtn.re/0mGegd) is a concept introduced by the EU in the 1980s to regulate the internal market: instead of issuing detailed technical regulations, the EU defines basic requirements for products as a prerequisite for market access. More detailed requirements are then developed by European standardization organizations (e.g. CEN, CENELEC or ETSI). These standards are not mandatory, but compliance with them establishes the presumption of conformity of the corresponding products (in the AIA: Art. 40).
Proof of conformity is then provided in the conformity assessment procedure, which the manufacturer either carries out himself (self-certification) or has carried out by an independent notified body. This assessment must be completed before the product – here, the AIS – is placed on the market, i.e. before the risk of an AIS can manifest itself.
The CE marking indicates that the manufacturer has checked the conformity of the product, that the applicable conformity assessment procedure has been completed and that the requirements have been met. Further information can be found in the EU Commission’s Blue Guide, the guide on the implementation of EU product rules 2022 of June 29, 2022 (https://dtn.re/hrqXlb).
The AI Act takes up this approach, but with a few special features:
The AIA does not regulate a technology but its use. However, it requires all HRAIS to comply with basic requirements in accordance with Art. 8–15. Specific use cases are addressed through selective bans (→ 27) and through the criteria for classification as HRAIS (→ 28).
The allocation of obligations is based on the different roles of the actors along the value chain (→ 20 ff.). The providers in the AIA correspond to the “manufacturers” of the New Approach, and the deployers (operators) to its “users”.
In principle, the provider (→ 20) of an HRAIS must carry out a conformity assessment procedure unless an exception applies due to special public interests (Art. 16 lit. f and Art. 46). The conformity assessment procedure is specified in Art. 43. For HRAIS in the area of biometrics (Annex III No. 1), the provider can choose between self-certification (the internal procedure set out in Annex VI) and assessment by a notified body (Art. 29 ff. → 56; the external procedure set out in Annex VII).
For self-certification to be permissible, harmonized standards (Art. 40) or common specifications (Art. 41), i.e. harmonized specifications of the essential requirements or their implementation, must be available for all aspects of the HRAIS. If these are missing, the provider must go through a notified body (Art. 43). For the other high-risk use cases according to Annex III, the self-certification procedure generally applies (Art. 43 para. 2), and for HRAIS that fall under product regulation according to Annex I Section A (e.g. medical devices), the procedure applicable there also applies for the conformity assessment under the AIA (Art. 43 para. 3).
For each HRAIS, the provider must issue an EU declaration of conformity and keep it for 10 years after the HRAIS has been placed on the market or put into service, for the attention of the authorities (Art. 16 lit. g and Art. 47). With the declaration of conformity, he declares that the HRAIS complies with the relevant requirements and that he is responsible for it (Art. 47 para. 2 and 4). The declaration of conformity must contain the information specified in Annex V and be translated into a language that is “easily understandable” for the competent national authorities (Art. 47 para. 2).
The provider must also affix the CE marking (Art. 16 lit. h and Art. 48). By doing so, he indicates that he assumes responsibility for conformity with the requirements of the AIA and any other applicable product requirements (Art. 30 of the Market Surveillance Regulation, https://dtn.re/h4EI0Y).
Placing on the market and putting into service are not permitted until the conformity assessment has been completed, and a new conformity assessment is required if the HRAIS is substantially modified (Art. 43 para. 5).
Insofar as a provider is subject to sector-specific product regulation, the requirements of the AIA must generally be covered within the framework specified there.
In addition, HRAIS must be registered in a public database (Art. 49).
HRAIS are not prohibited – in this respect, the AIA is very innovation-friendly. Only a few areas of application or use cases that have been assessed as particularly undesirable for society are prohibited (→ 27).
Conversely, however, parallel requirements, conditions and restrictions must be observed, e.g. those of data protection, unfair competition, labor or intellectual property law. The AIA contains hardly any permissions in this regard, with one exception for data protection (→ 1).
The AIA distinguishes between different levels or classes of risk. The decisive factor here is primarily the specific use of an AIS and not its technical characteristics as such, the data used for training or during use, or other criteria that could also serve for risk classification. This differentiation makes sense in principle; however, it is rather rough and cannot always do justice to the specific circumstances, analogous to the legal classification of certain personal data as particularly worthy of protection. The AIA recognizes four risk levels for AIS: unacceptable risk, high risk, limited risk or transparency risk, and everything else:
Prohibited AIS: AIS or use cases with unacceptable risks are generally prohibited as a “prohibited practice” (Art. 5 → 27).
HRAIS: AIS or use cases in sensitive areas such as critical infrastructure, education, employment, essential public services or law enforcement; they are subject to the requirements that make up the main part of the AIA. Art. 6 regulates the classification of an AIS as an HRAIS (→ 28).
AIS with transparency risks: AIS that are not HRAIS but are intended for direct interaction with natural persons, that generate content, or that are intended for emotion recognition or biometric categorization (Art. 50 → 37). Limited requirements apply here, primarily aimed at transparency.
Other AIS: For all other AIS, the AIA only contains marginal specifications (→ 38).
The obligations of a risk class also apply to the higher classes.
Prima vista, the AIA defines a fifth risk category: AIS that “pose a risk” according to Art. 79. These are AIS with special risks according to Art. 3 No. 19 of the Market Surveillance Regulation (https://dtn.re/JgakBQ), i.e. atypically increased risks to health, safety or fundamental rights. Such a system does not have to be an HRAIS, even if this will generally be the case. If a market surveillance authority (→ 43) has reason to believe that such risks exist, it examines the AIS in question and – if the assumption is confirmed – informs the competent national authorities. Deployers also have special obligations with such a system, but only if it is an HRAIS.
However, the requirements for such AIS do not increase materially; it is only a matter of a special check and, if necessary, the enforcement of compliance. Such AIS therefore do not form a separate risk category, and unless they are also an HRAIS, which is likely to be the case in most cases, there are hardly any requirements.
GPAIMs do not fall into these risk classes because they do not have a specific area of application that could be classified accordingly. Only when they become a GPAIS do they fall into a risk class as an AIS.
The AIA defines several roles that entail different duties and responsibilities in relation to AIS – and in some cases also to GPAI. It follows the standard of European product safety law with the distinction between provider, operator, importer and distributor, but also recognizes further roles:
Provider (Provider/AIS and GPAI): The entity (i.e. the natural or legal person) that places an AIS on the market and bears the main responsibility for compliance with the requirements (→ 20);
Operator (Deployer/AIS): The entity that deploys an AIS or a GPAI (→ 21);
Importer (Importer/AIS): The entity that imports an AIS or a GPAI of a third country provider into the EU for the first time (→ 23);
Retailer (Distributor/AIS): The entity that offers an AIS on the Community market without being a supplier or importer itself (→ 24);
Product manufacturer (Product Manufacturer/AIS): The entity that manufactures a product in which an AIS is installed;
Authorized representative (Representative): According to Art. 3 No. 5, this is a body in the EU that has been authorized in writing by the provider to fulfil the obligations set out in this Regulation or to carry out procedures on its behalf. Representatives have the monitoring and cooperation obligations under Art. 22.
Person concerned (data subject): Not legally defined; it concerns persons whose data are processed by an AIS. They have certain rights under the AIA (in addition to the rights under the GDPR).
If an entity has several roles at the same time, the requirements apply cumulatively (Recital 83). Recital 83 gives the example of a distributor who is also an importer, but this is excluded by the legal definitions (a distributor makes an AIS available “with the exception of the provider or the importer”; Art. 3 No. 7). More obvious is the provider who puts his AIS into operation himself and is then also the operator.
In addition, the AIA defines the “actor” (in the English version: “operator”), a generic term for providers, product manufacturers, operators, authorized representatives, importers and distributors (Art. 3 No. 8). The term is not often used in the AIA, usually only for ease of reference and without attaching legal consequences to it.
The AIA is initially applicable in the EU. However, it is of EEA relevance and will therefore also apply to Norway, Iceland and Liechtenstein. The AIA is currently being examined in the EEA (https://dtn.re/LxZNyE); it will only be formally adopted into EEA law following a decision by the EEA Joint Committee.
Like the GDPR, the AI Act aims to establish a certain basic protection and a level playing field within the EEA (Recital 22). It must therefore also cover certain cases with an inter-regional component. The AIA distinguishes between the individual roles in the value chain, which is why the discussion of roles (→ 17) was placed first.
According to Art. 2 and 3 (both provisions together are decisive for the scope of application), it applies as follows from a geographical and personal perspective:
for Providers:
regardless of the location of the provider, when an AIS or a GPAIM is placed on the market or put into operation in the EU (Art. 2 para. 1 lit. a); and
if the output of the system is used in the EU (lit. c → 19);
for Operator (Deployer):
if the deployer is established in the EU or is located in the EU (lit. b). “Establishment” is likely to be interpreted broadly in line with the GDPR;
if the output of the system is used in the EU (again lit. c);
for Importer (Importer): if he is established in the EU and imports an AIS (Art. 3 No. 6);
for Retailer (Distributor): if the AIS is made available on the EU market, regardless of the location of the distributor (Art. 3 No. 7);
for Product manufacturer (Manufacturer): if they place an AIS on the market or put it into service in the EU together with their product in their own name (Art. 2 para. 1 lit. e);
for (EU) representatives of foreign providers (Art. 2 para. 1 lit. f);
for affected persons in the EU (Art. 2 para. 1 lit. g).
A Swiss company can therefore fall under the AIA in particular if it:
sells an AIS in or into the EU (as developer, importer or distributor),
sells another product in the EU that uses an AIS as a component, or
generates output that is used in the EU (→ 19).
Output is described in Art. 2 para. 1 lit. c:
(c) providers and operators of AI-systems established or located in a third country where the output produced by the AI-system is used in the Union;
This certainly includes AI-generated text or an image. However, the AIA does not contain its own definition of output, in contrast to input (Art. 3 no. 33). The term is used more frequently, but in each case without a more detailed description (e.g. in Recital 12 in the definition of AIS → 13).
In some places, however, output is used in a way that suggests a broad interpretation, assuming the term is used uniformly (for example in Annex III No. 8 lit. b, HRAIS when used to influence an election or vote: an AIS is not covered here as an HRAIS if its output does not directly affect natural persons, such as a tool for campaign organization – here, output cannot mean only the result of generative AI). For this reason, and due to the protective purpose of the AIA, it makes sense to also include AI-generated control signals under the concept of output.
The question of when output is used in the EU is therefore more important. Not every spillover can be meant. Rather, a certain tangible impact in the EU is required, which, analogous to market conduct rules, can probably only be concretized by means of a targeting criterion. This is supported in particular by Recital 22, which aims to prevent circumvention but does not want to cover just any effect in the EU, speaks of “intention” and gives the example of a constellation in which there is clearly more than a mere spillover:
In order to prevent circumvention of this Regulation […], this Regulation should also apply to providers and operators of AI-systems established in a third country to the extent that the output generated by that system is intended to be used in the Union.
and:
This is the case, for example, where an actor established in the Union contracts certain services to an actor established in a third country in the context of an activity to be carried out by an AI-system […]. In such circumstances, the AI-system operated by the actor in a third country […] could provide the contractual actor in the Union with the output of that AI-system resulting from that processing […].
A provider can therefore not fall under the AIA solely due to the use of the output as long as the output is not intended for use in the EU, i.e. is not used in the EU in accordance with its intended purpose. The Blue Guide (→ 15) can provide a certain degree of concretization, although it remains vague.
Even so, the scope of application is broad. If an employee of a Swiss company sends an email with AI-generated text to a colleague in France, or if a presentation with an AI-generated image or an AI-generated transcript is sent to a recipient in the EU, this should be sufficient – unless a de minimis threshold is derived from the criterion of perceptibility, to be applied in addition to the requirement of orientation towards the EU.
For the time being, the question must remain open – it can be assumed that the EAIB (→ 53) will provide more specifics here. For non-EU actors who are merely operators of a HRAIS, and for actors who only deal with non-high-risk AIS, the question is in any event not as important as for HRAIS providers.
Due to the legal definition of the provider, the question can also be raised as to whether the use of output alone can suffice at all, or whether placing on the market or putting into service in the EU is additionally required. However, several arguments speak against this narrower interpretation:
Recital 22 extends the scope of application to AIS “even if they are neither placed on the market nor put into service or used in the Union”.
With regard to providers, Art. 2 would not need to mention the use of output under this interpretation, because placing on the market in the EU would suffice on its own (Art. 2 para. 1). With regard to operators, by contrast, the reference to output would remain justified even if provider status presupposed placing on the market or putting into service.
The narrow interpretation would lead to a situation in which an operator can be subject to the AIA, but not the provider of the corresponding system. Since the obligations of the operator presuppose, at least in part, that the provider has also fulfilled its obligations (e.g. in the case of the retention of log data, which is not possible if the provider has not ensured the log capability of the HRAIS), a parallelism is more likely.
The legal definition of provider allows the conclusion that placing on the market or putting into service is only a prerequisite for provider status if an entity does not develop an AIS itself, but has it developed. In the case of self-developed AIS, the development of the AIS is sufficient according to this interpretation (→ 20).
For reasons of protection, authorities and courts will probably follow a broad interpretation, i.e. allow the output to suffice. In any case, experience with the fundamental rights-related interpretation of the GDPR supports this.
Until the issue has been clarified, it should therefore be assumed that the intended use of the output is sufficient.
However, one may wonder whether use as output is required. This should be the case: anyone who wants AI-generated texts to be used in the EU can still fall under the AIA if they use a screenshot containing the corresponding text. By contrast, anyone who generates texts to illustrate how an LLM works and uses them as examples – not for their actual content – will hardly be using output in the EU.
“Providers” have the role that “manufacturers” have in product safety law. They are the entities that develop an AIS or a GPAIM (or have them developed under their control) and place them on the market or put them into service (Art. 3 No. 3):
[…] an […] entity that develops or has developed an AI system or an AI model with a general purpose and places it on the market under its own name or trademark or puts the AI system into operation under its own name or trademark, whether in return for payment or free of charge;

Providers bear the main responsibility for the conformity of the AIS, e.g. through the conformity assessment procedure, risk management, ensuring data quality during training, and post-market surveillance (→ 0).
However, the wording of Art. 3 No. 3 allows two interpretations:
The condition that an AIS is placed on the market or put into service can apply generally,
or only to the second case, in which an AIS is not developed in-house (“has … developed”).
At first glance, the first interpretation is the more obvious one, but it is by no means unambiguous. For territorial application, the use of output in the EU is sufficient (→ 19). It would be contradictory to dispense with most of the obligations merely because the entity concerned does not also place the (HR)AIS it uses on the market or put it into service in the EU. In other words, the broad interpretation of the concept of provider resolves the internal contradiction in Art. 2, since the use of output in the EU is then clearly sufficient. This speaks in favor of the broader interpretation of the concept of provider, as is also advocated in the literature.
“Placing on the market” (of an AIS or GPAIM) is defined in Art. 3 No. 9 as the first making available of a specific AIS or a specific GPAIM on the Union market.
The making available can be a one-off or a continuous act, but placing on the market occurs only once for each individual AIS or GPAIM. Anyone who makes an AIS available to a customer in the EU therefore does not become a provider if that AIS has already been placed on the market in the EU.
Placing on the market implies an offer or an agreement to transfer ownership, possession or other rights to the AIS or GPAIM, for a fee or free of charge. For an AIS, this is the case, for example, when it is made available for use on premise or as a SaaS offering, e.g. via an interface (API; see Recital 97 and Art. 6 of the Market Surveillance Regulation on distance sales). Placing on the market is carried out by the provider or – in the case of an AIS – an importer (see below). If they pass an AIS on to a distributor for further distribution, they are thereby already placing the AIS on the market (the subsequent act of the distributor is then a “making available”).
On the other hand, placing on the market would not include the import by a person for their own use, e.g. a cell phone with AI applications, the handover of an AIS for purely test purposes or the demonstration of an AIS at a trade fair (see the Blue Guide, section 2.3).
“Putting into service” (of an AIS) is defined in Art. 3 No. 11 as the supply of an AIS to the deployer for first use, but it also covers the provider’s own first use of the AIS:
Anyone who develops and uses an AIS is a provider within the meaning of the AIA with the corresponding obligations.
Operators, importers, distributors or other bodies can also subsequently become providers (→ 22).
Because a product that is enhanced by installing an AIS (“embedded AIS”) does not itself become an AIS, the manufacturer of the corresponding product does not become a provider within the meaning of the AIA if the embedded AIS is used under the name or brand of another entity.
Where several AIS are combined, each individual provider should remain a provider, as long as the components continue to be used as intended. However, because the AIA refers to “systems” and not to software packages, components can probably be considered together as one AIS if they form a functional unit.
Manufacturers of a regulated product that is subject to product regulation in accordance with Annex I because an AIS has been installed as a safety component (within the meaning of Art. 3 No. 14), and who place the product on the market or put it into service under their own name, are also deemed to be providers (Art. 25 para. 3).
Operators do not design the system themselves; they merely use it (Art. 3 No. 4) – in the terms of general product safety law, they are therefore “end users”.
However, the AIS must be used “under the authority” of the operator, i.e. on its own responsibility (Art. 3 No. 4). This presupposes that the system is not operated solely on behalf of another operator. It is unclear whether it also requires the operator to configure, control or parameterize the AIS itself, or whether it suffices that the operator decides on its use. If one starts from the operator’s obligations and asks when these obligations can meaningfully apply, a low threshold suffices; control beyond mere use is not a prerequisite here. On this obvious view, “under its authority” means that the use is not carried out solely in the sense of commissioned processing or by an employee, but by an entity that uses an AIS for its own purposes. Conversely, someone who uses an AIS for someone else is not an operator (but usually a provider).
The operator must comply with the operating instructions (→ 35). These are essential because they determine, among other things, the intended use of the AIS, i.e. the “intended purpose” (Art. 3 No. 12) for which the AIS is designed, as well as the framework for its correct use. If the operator leaves this framework, it can become a provider (→ 22).
A GPAIM has no operator, because a model as such cannot be operated (→ 39).
This question is less easy to answer than it initially seems. Art. 25 AIA contains the basic rule that an operator becomes a provider under certain circumstances (so-called “deemed provider”):
when it acts as a provider by affixing its name or trademark to a HRAIS after it has been placed on the market or put into service by the original provider,
when it substantially modifies a HRAIS (as defined in Art. 3 No. 23 AIA) without thereby turning it into a low-risk AIS, and
when it uses an AIS outside its intended purpose in such a way that the AIS only thereby becomes a HRAIS.
In each case, only the deemed provider is then considered the provider; the original provider is released from its responsibility in this respect. However, the original provider must cooperate with the new provider (Art. 25 para. 2) and can price this in accordingly. The obligation to cooperate does not apply, however, if the original provider has specified that the AIS may not be converted into a HRAIS – which also speaks in favor of corresponding contract drafting.
In contrast, the mere use of a HRAIS outside its intended use is not sufficient for classification as a provider. On the contrary, the provider must expect such use to a certain extent, as Art. 9 para. 2 lit. b shows alongside Art. 25: the provider’s RMS must also take into account the risks of foreseeable misuse. Only when misuse amounts to a substantial modification or turns an AIS into a HRAIS does the operator become a “deemed provider” under Art. 25. Anyone who uses a chatbot intended for customer support to select job applicants therefore becomes a HRAIS provider – but not when using it for employee satisfaction surveys (no HRAIS).
Fine-tuning (→ 12), too, should not in itself suffice to become the provider of the further-trained AIS, unless the operator offers the AIS under its own name or uses it in such a way that it becomes a new HRAIS. It remains to be seen whether provider status in the case of fine-tuning follows from Art. 25 or simply from the undefined element of “developing” under Art. 3 No. 3. In the latter case, an operator that fine-tunes would more readily be classified as a provider. However, the AIA generally uses the term “develop” in a broader sense (e.g. in Art. 2 para. 6: no application of the AIA to an AIS “developed” [and put into service] solely for research purposes). In addition, Recital 93 separates the area of development from the role of the operator. Above all, the fact that the user can hardly fulfill the provider’s obligations after mere fine-tuning is likely to be decisive, because its control over the AIS does not go far enough. Likewise, the operator of a GPAIS does not become a provider simply by using RAG (→ 12).
For a GPAIM, it likewise holds that the model becomes a GPAIS as soon as it is made available as a product, even if only by adding a user interface (→ 39). The above requirements then apply. Anyone who purchases a GPAIM and puts it into service for a specific use case is the provider of the resulting AIS.
The importer is an entity in the EU that imports a foreign HRAIS (i.e. a HRAIS offered under the name or trademark of an entity established in a third country) into the EU (Art. 3 No. 6).
The importer does not have to establish conformity itself, but its obligations are geared towards those of the provider – in other words, it is not merely a reseller. It must
verify that the conformity assessment has been carried out, that the technical documentation in accordance with Art. 11 and Annex IV AIA is available, that the HRAIS bears the CE mark and that the provider has appointed an authorized representative (Art. 23 para. 1),
retain the documentation for the attention of the supervisory authorities (para. 5),
refrain from placing the HRAIS on the market if there is any doubt about compliance with the essential requirements, and
in the case of higher risks (as defined in Art. 79 para. 1), inform the provider, the authorized representative and the competent market surveillance authorities accordingly (Art. 23 para. 2).
According to Art. 3 No. 7, a distributor is an entity that obtains a HRAIS from a provider, an importer or another distributor and makes it available on the Union market without itself being a provider or importer, i.e. after it has been placed on the market. “Making available” means any supply of an AIS or a GPAIM for distribution or use on the Union market, whether in return for payment or free of charge (Art. 3 No. 10).
Similar to the importer, the distributor must
verify that the HRAIS bears the CE mark, that a declaration of conformity and the operating instructions are available, and that the provider or importer has indicated its name or trademark and has a QMS (→ 35).
If there are reasonable doubts about compliance with the essential requirements, the HRAIS must again not be made available, and the distributor must contact the provider or importer.
If defects cannot be remedied, the HRAIS must be withdrawn from the market or recalled (by the distributor, provider or importer; Art. 24 para. 4).
In the event of higher risks (in accordance with Art. 79 para. 1), the provider or importer and the competent authorities must be informed (Art. 24 para. 4).
This role is not legally defined either. It is an entity that manufactures a product into which an AIS is integrated. Under certain circumstances, this entity becomes a provider: namely, when the AIS is a safety component of its product, the product falls under a product regulation according to Annex I, and the product manufacturer places the AIS on the market together with its own product under its own name, or the product is put into service under the product manufacturer’s name after being placed on the market (Art. 25 para. 3). In this case, the product manufacturer must ensure that the installed AIS complies with the requirements (Recital 87).
According to Art. 22, the provider of a HRAIS must appoint an authorized representative if it is established outside the EU. According to Art. 3 No. 5, an “authorized representative” is an entity resident or established in the EU that the provider of an AIS or a GPAIM has authorized in writing (i.e. probably in text form) to fulfil the obligations under the AIA on its behalf and to carry out the relevant procedures, and that has accepted this mandate.
The tasks of the authorized representative are to be defined in the contract, but include at least the catalog according to Art. 22 para. 3, e.g. checking whether the declaration of conformity and the technical documentation have been drawn up and the conformity assessment procedure has been carried out, the provision of certain information and documents to the authorities and obligations to cooperate in the registration of the HRAIS. Art. 54 contains an analogous provision for providers of a GPAIM (→ 39).
Authorized representatives can resign their mandate and may even have to do so.
Operators and parties other than the provider are not obliged to appoint an authorized representative.
AIS or use cases with unacceptable risks are exceptionally prohibited as “prohibited practice”, i.e. the placing on the market, putting into service or use of an AIS for a corresponding purpose is prohibited (Art. 5):
Subliminal influence (Art. 5 para. 1 lit. a): Manipulation that unconsciously influences behavior, thereby distorting a decision and causing harm. This includes, for example, forms of deception, e.g. through “dark patterns” or “nudging”, particularly through a procedure that is so low-threshold that it is not consciously perceived, e.g. in a virtual environment (Recital 29). Intent to deceive is not a fundamental prerequisite, as deliberate deception is only one variant of the offense.
Exploiting vulnerability on the basis of age, disability, etc. (Art. 5 para. 1 lit. b). This also concerns the harmful distortion of decisions (Recital 29). Proportionate affirmative action is not covered;
Social scoring (Art. 5 para. 1 lit. c): the assessment of persons according to social behavior or personal characteristics over longer periods of time, if persons are thereby treated unfairly, i.e. if the use of the AIS would have an unexpected or disproportionate consequence for the persons concerned. This does not include credit scoring, which is not prohibited but high-risk (→ 32);
Risk assessment for criminal offenses (predictive policing) through profiling (Art. 5 para. 1 lit. d; with exceptions);
Face recognition: the creation of facial recognition databases through indiscriminate scraping of images from the internet or surveillance footage (Art. 5 para. 1 lit. e). The comparison of an image with images on the internet, for example, would not be covered, because this does not involve scraping;
Emotion recognition in the workplace or in educational institutions (Art. 5 para. 1 lit. f; with exceptions for health or safety-related concerns). Emotion recognition in other areas is not prohibited. Prohibited would be, for example, the transcription of calls with an evaluation of whether a customer advisor is sufficiently friendly or whether an employee expresses negative emotions towards the company. Because the AIA does not use the defined term “emotion recognition system” in this prohibition, the recognition of “intentions” (Art. 3 No. 39) is not covered – it must be about emotions – but the basis of the emotion recognition can be not only biometric but also other data;
Categorization according to biometric data to infer race, political opinions, religious beliefs, sexual orientation, etc. (Art. 5 para. 1 lit. g; with exceptions). The term “biometric data” is defined in Art. 3 No. 34; it must relate to personal data. However, AIS are exempt from the ban if the categorization is only an ancillary function of another commercial service that is necessary for objective technical reasons (Art. 3 No. 40) – for example, if an online service uses body characteristics for clothing purchases (insofar as this involves biometric data);
Real-time biometric remote identification in publicly accessible areas (Art. 5 para. 1 lit. h and para. 2 – 7; with exceptions). Authentication is not covered (→ 29).
The Commission has also issued guidelines on prohibited practices (→ 51).
These prohibitions may overlap with other prohibitions, e.g. prohibitions on deception under fair trading law or data protection restrictions. The fact that an AIS is not prohibited does not mean that it is generally permitted. Restrictions may arise, for example, from data protection and fair trading law.
HRAIS are AIS or use cases in sensitive areas such as critical infrastructure, education, employment, essential public services or law enforcement; they are subject to the requirements that make up the main part of the AIA (→ 15). Art. 6 regulates the classification of an AIS as a HRAIS.
A distinction must be made between two cases:
The first case, under Art. 6 para. 1, concerns AIS covered by a product regulation according to Annex I, because the AIS or its use case is itself subject to such regulation or because the AIS was installed as a safety component (within the meaning of Art. 3 No. 14) in such a product. The focus here is on the product risk, in particular risks to life and limb. Annex I distinguishes between two categories:
The first category, in Section A, concerns product regulations that follow the New Approach. The AIA is directly applicable here. This applies, for example, to machinery, toys, explosives and medical devices.
The second category, in Section B, concerns product regulations outside the New Approach. Here the AIA is not directly applicable. Instead, the corresponding legal acts are amended by Art. 102 ff. so that the requirements of Chapter III Section 2 (Art. 8 ff., the essential requirements for HRAIS) are taken into account in the sectoral legislation. This concerns means of transport (aviation, rail, motor vehicles, etc.).
The prerequisite in each case is that the product, or the AIS as a product, is subject to a third-party conformity assessment (Art. 6 para. 1 lit. b). Whether this can also cover cases in which an internal conformity assessment procedure is used is disputed.
The second case, under Art. 6 para. 2, concerns AIS mentioned in Annex III. Annex III concerns specific areas of use; the point of reference here is therefore less a product risk than a risk of use. The following cases are listed exhaustively, each relating to the intended use of the HRAIS (see → 29 ff.):
Biometrics: use of AIS for remote biometric identification, biometric categorization or emotion recognition (see → 27);
Critical infrastructure: AIS that serve as safety components in certain critical infrastructures (→ 31);
Education and vocational training: AIS for managing access to educational opportunities, assessing learning outcomes or monitoring examinations (→ 30);
Employment, personnel management and access to self-employment: AIS in the area of recruiting, for relevant decisions, or for the monitoring and evaluation of performance or behavior (→ 30);
Essential services and benefits: AIS for assessing entitlement to public support (e.g. social insurance), creditworthiness assessment, risk and premium determination in life and health insurance, or the triage of emergency calls, emergency operations and first aid (→ 32);
AIS to support law enforcement authorities, in the area of migration, asylum and border control and in the judiciary and democratic opinion-forming (→ 33).
The intended use of the AIS is decisive here, the intended purpose being set either by the provider (Art. 3 No. 12) or by an operator that uses an AIS outside the intended purpose (Art. 25 → 22).
Annex III No. 1 regulates use cases in the field of biometrics. Three cases are covered:
The first case is biometric remote identification, legally defined in Art. 3 No. 41. It refers to AIS intended to identify persons without their active involvement and generally from a distance. This does not include authentication systems for premises and devices, such as iris, face, vein and fingerprint scanners (see also Recital 54). However, a camera mounted above a highway would be covered if an AIS compares the images with a database.
The second case concerns the biometric categorization of people, where an AIS is intended to infer “sensitive or protected attributes” (e.g. categorizing people into ethnic groups using AI). This does not include (→ 27) cases in which the categorization is only an ancillary function of another commercial service that is necessary for objective technical reasons (Art. 3 No. 40).
The third case is AIS for emotion recognition. According to Art. 3 No. 39, these are AIS intended to detect or infer “emotions or intentions” on the basis of biometric data. This covers, for example, an AIS that infers emotions from the voice – its coloring, trembling, etc. On a broad interpretation, inferences about health are likely to be covered as well. However, the basis must be biometric data: if emotions (or intentions) are assessed on the basis of e‑mails or other texts, this does not turn the AIS into a HRAIS. The result may differ in the workplace, however, where an AIS becomes a HRAIS if it is used to influence decisions on working conditions, promotion, dismissal, etc., or to monitor performance or behavior (→ 30). This naturally also applies if the input data is biometric data.
As mentioned, Annex III lists use cases that are considered high-risk (→ 28). Annex III No. 3 concerns vocational and non-vocational education and training:
A first use case (lit. a) are AIS intended to be used to determine access or admission to educational opportunities. “Determine” is to be understood literally, as Recital 56 shows – an AIS whose intended use is a gatekeeper function for educational offers, e.g. in an admission or aptitude test, is therefore high-risk. This applies not only to decisions on access as such, but also to the allocation among different educational opportunities. “Determining” is more than mere participation, but there is nevertheless quite a lot to suggest that an AIS that makes recommendations for admission would be covered.
A second case is an AIS intended to be used for the assessment of “learning outcomes”. This is primarily about the assessment of examinations. However, the wording of the law goes somewhat further: the assessment of learning outcomes as such seems to suffice. This would, for example, also cover the correction function in a language learning program if it uses an AIS, even if nothing more than passing a level is at stake.
The third case overlaps with the first: it concerns AIS used to assess the level of education that someone is to receive or to which they will be admitted. Aptitude tests are likely to be the main focus here. However, it must concern education – talent management with an AI-supported assessment of suitability for another position would not be covered here (but may fall under a different use case, see below).
The fourth case concerns AIS intended to be used for the monitoring of examinations in education and vocational training.
It remains to be seen how broadly the concept of education is to be understood. According to Recital 56, it includes “educational and vocational training institutions or programs at all levels”, i.e. school education, but probably also early childhood and continuing education. However, internal training courses that do not serve further education purposes, such as compliance training, are hardly covered. An AI-supported evaluation of test questions during such training should therefore not suffice. It is a borderline case, however, and this is where the workplace-related use cases (see below) often come into play (in particular the evaluation of an employee’s behavior and performance).
Annex III No. 4 specifically concerns the workplace. A distinction must be made between the recruitment process and the employment relationship:
As with all use cases in Annex III, the use must correspond to the intended purpose. Formulating a job advertisement with ChatGPT is therefore not sufficient. On the other hand, anyone who builds an AIS that categorizes applications on the basis of an OpenAI model is operating a HRAIS. It should also be sufficient for an AIS to check how well applications match a job advertisement – a form of semantic search that amounts to a screening of applications.
Decisions on working conditions, promotion and dismissal. Here it is prima vista unclear whether the AIS must make these decisions or merely influence them. The legal text implies the latter: it is a matter of using it to make decisions which then – as a consequence – influence working conditions, etc. However, the AIS does not have to make the decision itself; it is sufficient if it is intended to support a human decision on such points (the English text is clearer: “intended to be used to make decisions”, not “intended to make decisions”). The purpose of the law according to Recital 57 also speaks in favor of this interpretation (protection of career prospects and livelihoods from “noticeable influence”).
Two further use cases apply in addition. One is the assignment of tasks on the basis of behavior or personal characteristics or attributes, the other the observation and evaluation of performance and behavior. An AIS thus becomes a HRAIS as soon as behavior is evaluated, even if no decision on career advancement is subsequently made, prepared or influenced (although AI-supported performance or behavioral assessments are generally designed for such decisions).
An AI-supported evaluation of the performance of a call center employee would therefore be a HRAIS. In contrast, an AI-supported optimization of field service routes would not: the behavior of the relevant employees is processed, but the evaluation does not relate to that behavior; it abstracts from it. Since career advancement is not at risk in such a case, it should not constitute a HRAIS. However, if an AI-supported evaluation is subsequently carried out to determine whether a driver is following the optimal route, this would be a HRAIS. Human awareness of the result should not be a prerequisite: a driving assistant that makes suggestions depending on the route actually taken would therefore probably be a HRAIS. The same applies analogously to an AIS used in production to optimize processes.
This does not yet say what all belongs to the “workplace”. Self-employed work may also be covered, especially as Recital 57 also mentions “access to self-employment”. All of the use cases mentioned can therefore also apply if the selection, decision, observation or evaluation concerns not a dependent employee but a self-employed person. However, an employee-like status, i.e. a certain degree of dependency and subordination, should be required; otherwise there is no corresponding need for protection.
The AIA provides for only one case here: an AIS is used (as intended) as a safety component in the control or operation of critical digital infrastructure in accordance with point 8 of the Annex to Directive 2022/2557 on the resilience of critical entities (CER Directive; https://dtn.re/D2CV56), or in the area of road traffic or the supply of water, gas, heat or electricity.
Annex III No. 5 regulates three further cases that are relevant in the private sector.
The first concerns AIS for the “creditworthiness check and credit rating” of natural persons (but not of legal entities). This is relatively broad, because the AIA does not define what these terms cover. In any case, it concerns not only credit agencies and comparable providers of creditworthiness information, but also companies that carry out corresponding assessments for themselves or for group companies (provided they are AI-supported).
However, this does not apply to AIS used for the “detection of financial fraud”. The text here says “are used” and not “are intended to be used”. This could lead to the conclusion that an AIS is not (or no longer) a HRAIS even if its primary purpose is to assess creditworthiness but it is only used to detect fraud. However, this contradicts Recital 58, which is narrower in this respect: it only excludes AIS “intended” for fraud prevention. At the same time, it is also broader: AIS intended under EU law to detect financial fraud or to calculate capital requirements are not HRAIS. This could be a problem for a Swiss financial services provider that uses an AIS to calculate Swiss capital requirements (i.e. not on the basis of EU law) and makes the result available to its EU parent company, thereby falling territorially under the AI Act (→ 18).
An AIS also becomes a HRAIS if it is used in the insurance sector for risk assessment or premium determination, but only in the area of life or health insurance.
Another HRAIS is an AIS used for the triage of emergency calls, the dispatch of paramedics, police or firefighters, or the prioritization of first aid.
Finally, according to Annex III No. 8, AIS are also HRAIS if they are used for the determination of facts and the application of the law by arbitration tribunals and mediators (in addition to the state courts → 33) – subject, as always, to the exception for merely ancillary support within the meaning of Art. 6 para. 3 (→ 34), e.g. “the anonymization or pseudonymization of court judgments, documents or data, communication between staff or administrative tasks” (Recital 61). This would include, for example, AIS that establish the facts from case files. AIS can also be HRAIS in the private sector in connection with influencing elections and votes (→ 33).
Annex III contains some use cases that are only relevant in the public sector (but include companies acting on behalf of a public authority).
Annex III No. 5 concerns AIS for use by or on behalf of public authorities for the assessment of whether an entitlement to “Basic public support and services” is to be restricted or revoked. This applies, for example, to social insurance or social assistance. However, these cases are limited to application to natural persons.
Annex III No. 6 concerns various use cases in the area of law enforcement, and No. 7 the area of migration, asylum and border control. No. 8 lit. a then concerns AIS for use by or on behalf of a judicial authority (including private dispute resolution → 32) to assist in the investigation of facts and the application of the law. According to lit. b, AIS are also high-risk if they are intended to influence the outcome of an election or vote, or voting behavior. However, this must be a direct influence – AIS tools for the administrative support of campaigns are not covered.
Yes, in contrast to the product-related high-risk cases, it is possible to prove the lack of high risk as an exception for the use-related classifications according to Annex III (→ 28).
According to Art. 6 para. 3, this applies under two cumulative conditions:
Firstly, the intended use of the AIS must be harmless, in that it neither entails a significant risk nor materially influences a decision (Recital 53). This is the case if the AIS is intended only
to perform a “narrow procedural task” (e.g. structuring unstructured data or categorizing data), or
to act as a mere additional layer improving the result of a human activity (e.g. the now common “improvement” of a text written by a human), or to detect decision-making patterns or deviations from prior decision-making patterns (e.g. checking whether a human assessment deviates from a previous pattern), or
to carry out a merely preparatory task for an assessment (e.g. the translation of texts for further human use); see in more detail Art. 6 para. 3 and Recital 53. The EU Commission (→ 51) is expected to provide clarifications here.
Secondly, the AIS must not perform profiling (loc. cit.). For this term, the AIA refers to Art. 4 No. 4 GDPR (https://dtn.re/8YoXjh).
A provider who wishes to make use of this exemption must document this assessment prior to placing on the market or putting into service (Art. 6 para. 4). It must also register the AIS in the same way as an HRAIS (Art. 49).
Not every step in the value chain triggers obligations and requirements. In principle, AIS must meet the essential requirements at the time they are placed on the market or put into service (→ 15). In practice, however, other stages also trigger certain obligations.
These obligations can be broken down as follows, although the allocation to individual phases cannot be sharply drawn, as the AIA does not legally define all of these factually distinct stages as triggers of obligations. Details on the individual obligations can be found in the referenced questions and answers. It should also be noted that for the products listed in Annex I Section B, it is not the AIA that applies but the obligations adopted into the respective product regulation (→ 28).
| | System | Role | Trigger | Legal consequences and requirements |
|---|---|---|---|---|
| | Provider | | | |
| 1 | HRAIS | Provider | Procurement of system components | If the provider procures components from a supplier – this will involve software components, because in a combination of hardware and software only the software is likely to qualify as an AIS, so the procurement of hardware does not constitute incorporation into an AIS (→ 28) – it must conclude a contract with the supplier. The contract must be in writing, i.e. probably documented in text form, and regulate the points essential for the HRAIS provider (Art. 25 para. 4). The AI Office (→ 52) is expected to provide templates here. Excluded from this obligation is the supply of non-GPAI software under a free and open-source license (FOSS), but such software providers are encouraged to provide information relevant to HRAIS providers (Recital 89). |
| 2 | HRAIS | Provider | Training | A HRAIS does not necessarily have to be trained – training is not an obligation in itself (Art. 10 para. 6; nor is it required as a risk mitigation measure: Recital 65), but rather a circumstance that can lead to classification as an AIS (→ 13). Where training does take place, however, certain requirements apply. First of all, the question arises with which data a HRAIS should or may be trained. Art. 10 para. 3 (data governance) specifies requirements: test data must take into account characteristics or elements typical of the framework conditions of the HRAIS in its intended use, i.e. they must be meaningful. This may include the use of personal data or even special categories of personal data (→ 58), e.g. for systems that classify applications and must be trained to have the weakest possible bias in terms of age, gender, ethnic background, etc. (some manufacturers therefore include bias audits in their customer documentation; an illustrative bias check follows after this table). Art. 10 para. 5 therefore contains a legal basis for the use of such data for testing and training purposes, subject to the conditions of para. 5 lit. a‑f. For the training itself, the provider must then make and document a number of decisions. These are set out in Art. 10 para. 2 and mainly concern the procurement of training data, the preparation of the data (e.g. labeling, tagging, etc.), the definition of assumptions and target values, the metrics for measuring whether targets are achieved or assumptions are correct, and the avoidance of bias. Even during the training phase, the risk management system (RMS) mentioned below is relevant for HRAIS (see Art. 9). Although, as mentioned, the provider does not have to carry out training as a risk mitigation measure, it is nevertheless invited to do so (Recital 65). In this respect, the RMS should of course also cover the training phase. |
| 3 | HRAIS | Provider | Testing | Unlike training, testing is a genuine obligation of the HRAIS provider (Art. 9 para. 6). HRAIS must be tested so that risks can be identified and, if necessary, mitigated. Tests must be carried out at the appropriate time, but in any case before placing on the market or putting into service (para. 8). For tests by providers of GPAIM with systemic risks, see → 41. The requirements of Art. 9 and 10 apply to carrying out the tests; the requirements for training data also apply to test data (Art. 9 para. 6; this likewise applies to any use of personal data). According to Art. 9 para. 7, tests may also be carried out under real-world conditions for a maximum of 12 months, provided the requirements of Art. 60 AIA are met. Such tests require, among other things, a separate plan, which must be approved by the competent market surveillance authority (Art. 60 para. 4 lit. a and b). |
| 4 | HRAIS | Provider | Placing on the market or putting into service | Legally, the time at which the HRAIS is placed on the market or put into service is decisive for most of the provider’s obligations. The provider must therefore take these into account when planning and designing an AIS that is potentially a HRAIS. First of all, the provider must prepare the technical documentation (Art. 11 and Annex IV). This is the core: the technical documentation serves to demonstrate compliance with the essential requirements and is therefore also the basis for the conformity assessment. In particular, it contains a description of the HRAIS, its components, its development or training (including the data used for this and the validation and relevant tests), its functioning and architecture, the guarantee of human oversight (→ 37), its control, the risk management system and the post-market surveillance procedure (Annex IV). The operating instructions (Art. 3 No. 15 and Art. 13 para. 3) are also part of the technical documentation (Annex IV No. 1 lit. h). They state and define the intended use of the HRAIS (Art. 3 No. 12), which determines whether the AIS is a HRAIS according to Annex III (→ 32), helps delimit the provider’s area of responsibility and is an essential benchmark for the compliance requirements (cf. for example Art. 8 para. 1, Art. 10 para. 3 or Art. 26 para. 6 AIA). The operating instructions must contain precise, complete, correct, clear and comprehensible information, be provided digitally or physically and without barriers (Art. 13 para. 3), and contain at least the information pursuant to Art. 13 para. 3 lit. a‑f. This includes, among other things, the purpose, characteristics and performance limits of the HRAIS, the measures to ensure human oversight, the lifespan of the HRAIS, information on maintenance and updates, and a description of the log capability. On the basis of the technical documentation, the provider must then carry out the conformity assessment procedure (→ 15), and for each HRAIS it must issue an EU declaration of conformity and keep it for the attention of the authorities (Art. 47). In addition, it must affix a physical or digital CE mark, along with its name or trademark and a contact address (Art. 16 lit. b → 15). The essential duties also include the following: - QMS: According to Art. 17, the provider must have a quality management system (QMS) that generally “ensures” compliance with the AIA, i.e. a system of policies, processes and instructions covering all phases of the HRAIS, including a compliance concept with responsibilities and accountabilities, information on the development, testing and validation of the HRAIS, where applicable the harmonized standards used for conformity assessment, data governance (→ 36), post-market surveillance (→ 43), incident handling (→ 45), communication with authorities, the required documentation, and resource management. The risk management system is also part of the QMS (Art. 17 para. 1 lit. g; the RMS can be managed separately but must be covered by the QMS). - RMS: The provider must set up, apply, document and maintain a risk management system (RMS) for each HRAIS (Art. 9). The RMS must accompany the HRAIS throughout its life cycle – even after it has been placed on the market or put into service – and be kept up to date, which requires appropriate governance.
In particular, risks to health, safety or fundamental rights, especially of vulnerable persons, must be continuously identified and assessed – not only in relation to the intended use but also to foreseeable misuse (Art. 9 para. 2 lit. b) – and they must be adequately mitigated as early as the design and development phase, insofar as the provider can mitigate them (lit. d). This also includes, for example, informing or training the operator (Art. 9 para. 5 lit. c). The identified and accepted risks should then be mentioned in the operating instructions. The provider can base the RMS on corresponding standards (→ 61). - Ensuring log capability: The provider must ensure that the system is technically capable of logging (Art. 12). Art. 12 para. 2 – 3 specifies what must be logged. - Comprehensibility of the output (Art. 13): The provider must ensure that the output of the system is clear and understandable for the operator. The operating instructions (Art. 3 No. 15) serve this purpose, but design measures will also be required. - Human oversight (Art. 14): The HRAIS must be designed in such a way that it enables effective human oversight. This may include measures built into the HRAIS (e.g. user interfaces, a kill switch, etc.), but also instructions that enable the operator to understand the HRAIS sufficiently (see Art. 11, Art. 14 para. 4 and Annex IV). - Reliability, robustness and cybersecurity (Art. 15): A HRAIS must be designed to be reliable and robust and to ensure a sufficient level of cybersecurity. The provider must therefore ensure, among other things, that the HRAIS is sufficiently resistant to physical and digital threats and that suitable measures protect its integrity, confidentiality and availability. For systems that continue to learn after being placed on the market or put into service, the risk of bias and feedback loops must be mitigated. The EU Commission (→ 51) is expected to contribute to the development of benchmarks and metrics (Art. 15 para. 2 AIA). - Accessibility by design (Art. 16): Accessibility must be integrated into the design of the HRAIS. The requirements are set out in detail in Directive 2016/2102 on the accessibility of the websites and mobile applications of public sector bodies and in Directive 2019/882 on accessibility requirements for products and services (Art. 16 lit. l). - Registration: Providers must register HRAIS with the Commission (→ 51) if they are classified as HRAIS in accordance with Annex III (use cases) (→ 28). To do so, they must provide at least the information specified in Annex VIII Section A. |
| 5 | HRAIS | Provider | Occurrence of particular risks | If the provider becomes aware of particular risks within the meaning of Art. 79 para. 1, it must immediately investigate the causes and inform the competent market surveillance authorities (Art. 82 para. 2 → 45). |
| 6 | HRAIS | Provider | Occurrence of a serious incident | If a serious incident (→ 45) occurs, the provider must immediately inform the competent market surveillance authorities (→ 55), investigate the incident and mitigate the risks. For providers of GPAIM with systemic risks, see below. |
| 7 | AIS | Provider | Placing on the market or putting into service in the EU | If an AIS is placed on the market or put into service in the EU, the provider is subject to the AIA (→ 18) and must appoint an authorized representative in the EU (→ 26). |
| 8 | AIS | Provider | Use of output in the EU | Even an entity that uses an AIS in such a way that its output is used as intended in the EU falls within the scope of the AIA (→ 18) and must appoint an authorized representative in the EU (→ 26). |
| 9 | AIS | Provider | Dealing with AIS | A provider’s handling of AIS also triggers the requirement for AI literacy (→ 38). |
| 10 | AIS | Provider | Generative AIS | For AIS that generate synthetic content (audio, image, video, text) – primarily GPAIS, but other AIS are also covered – providers must ensure that the output is marked in a machine-readable format so that it is recognizable as artificially created or manipulated (“watermarking” → 37). |
| 11 | AIS | Provider | AIS for direct interaction with data subjects | If an AIS (including, where applicable, a HRAIS) is intended for direct interaction with data subjects, the provider must ensure that natural persons are informed of the interaction with an AIS (unless this is obvious in the given circumstances → 37). |
| | Product manufacturer | | | |
| 12 | HRAIS | Product manufacturer | Installation of an AIS in a product | Manufacturers of a regulated product that is subject to product regulation under Annex I because an AIS has been installed as a safety component (within the meaning of Art. 3 No. 14), and who place the product on the market or put it into service under their own name, are deemed to be providers within the meaning of the AIA (Art. 25 para. 3) and have the corresponding obligations. |
| | Importers and distributors | | | |
| 13 | HRAIS | Importer | Import | The obligations of the importer (→ 23) are considerably leaner than those of the provider, because the main responsibility remains with the provider. First and foremost, the importer has the duty to verify the provider’s compliance measures; if it has doubts about the conformity of the HRAIS, it may not place the HRAIS on the market. If it encounters risks within the meaning of Art. 79 para. 1, it must also inform the provider, the authorized representative and the market surveillance authorities (Art. 23 para. 2 → 45). Further obligations arise from Art. 23 para. 3 – 7. |
| 14 | HRAIS | Operators, importers, distributors | Occurrence of particular risks | If an operator, importer or distributor has reason to believe that a HRAIS poses particular risks to health, safety or fundamental rights (Art. 79), it must immediately inform the provider or distributor (in the case of the operator), the provider and its authorized representative (in the case of the importer), or the provider and the importer or any other party involved (in the case of the distributor), as well as the competent market surveillance authority, and suspend the use of the HRAIS (Art. 26 para. 5, Art. 23 para. 2 and Art. 24 para. 4; Art. 82 para. 2 → 45). |
| 15 | HRAIS | Distributor | Distribution | A distributor is anyone who makes a HRAIS available on the market (→ 20). The obligations of distributors are similar to those of importers (Art. 24). |
| | Operator | | | |
| 16 | HRAIS | Operator | Use | Operators (→ 21) must keep an inventory of the HRAIS they use (this follows indirectly from Art. 26). They must also ensure that all relevant operating data is automatically logged and stored for a specified period, and they must use the HRAIS in accordance with the provider’s operating instructions (Art. 26 para. 1). They must further ensure that the input data is fit for purpose (i.e. appropriate to the purpose of the HRAIS) and sufficiently representative (Art. 26 para. 4 → 36). Another key point is human oversight: the operator must ensure that human oversight is possible during operation (Art. 26 para. 2) and must continuously monitor the operation of the system (Art. 26 para. 5). If it suspects a particular risk within the meaning of Art. 79 para. 1 (→ 45), it must inform the provider or distributor and the market surveillance authority accordingly and stop using the HRAIS (Art. 26 para. 5; which presupposes that it can react accordingly). In the event of a serious incident (→ 45), the provider and then the importer or distributor and the market surveillance authority must be informed immediately (see also Art. 73). |
| 17 | HRAIS | Operator | Use in the workplace | If an employer uses HRAIS in the workplace, it must inform the employees and employee representatives that they will be affected by its use (Art. 26 para. 7). Obligations to cooperate under the applicable law are reserved. |
| 18 | HRAIS | Operator | Use for decisions | Special requirements apply if a HRAIS is to be used for decisions (an AIS may also become a HRAIS as a result: Art. 25 and Annex III → 28). If the HRAIS makes decisions that have legal or similarly significant effects, this must be communicated to the data subjects (Art. 13 and Art. 26 para. 11 → 37), and in the case of automated AI decisions data subjects have a right to an explanation (Art. 86; in addition, the relevant requirements of the applicable data protection law may of course apply). The operator must also ensure that the input data for the system is relevant, correct and up to date (see above). |
| 19 | HRAIS | Operator | Biometric remote identification | If a HRAIS is used for remote biometric identification within the meaning of Annex III No. 1 lit. a, the results must be verified and confirmed separately by at least two competent natural persons before decisions are made or measures taken (Art. 14 para. 5). |
| 20 | HRAIS | Operator | Use of an emotion recognition system or for biometric categorization | When using an emotion recognition system or a system for biometric categorization, the operators must inform the data subjects about the operation and the personal data used (→ 37). |
| 21 | HRAIS | Operator | Occurrence of a serious incident | If a serious incident occurs, the operator must immediately inform the provider and then the importer or distributor as well as the competent market surveillance authorities (Art. 26 para. 5 and Art. 73 → 45). |
| 22 | AIS | Operator | Operation | When operating an AIS, only the requirements for AI literacy apply (→ 38). |
| 23 | AIS | Operator | Use for deepfakes | If an AIS (which can also be an HRAIS) is used for deepfakes, the operator must disclose the artificial production (Art. 50 para. 4 → 37). |
| 24 | AIS | Operator | Generation of output | If operators use an AIS to create or manipulate text and the text is published to inform the public about matters of public interest, they must disclose the artificial creation or manipulation (Art. 50 para. 4 → 37). |
| | GPAIM | | | |
| 25 | GPAIM | Provider | Placing a GPAIM on the market in the EU | Providers of a GPAIM fall within the scope of the AIA if they place a GPAIM on the market in the EU (→ 18). In this case, they must appoint an authorized representative in the EU (→ 26). |
| 26 | GPAIM | Provider | Offering a GPAIM for integration into an AIS | The AIA treats a GPAIM not as a HRAIS but as a preliminary stage of an AIS (→ 39). The provider of the GPAIM must support downstream providers who integrate the GPAIM into an AIS: it must provide information about the GPAIM and its development in accordance with the requirements of Annex XII (Art. 53 para. 1 lit. b). In particular, it must prepare technical documentation (Art. 53 para. 1 lit. a) – not in accordance with Annex IV, as for HRAIS providers, but in accordance with the separate Annex XI. Because GPAIM are mostly LLMs trained on masses of data, the provider of the model must also have a policy on compliance with European copyright law (Art. 53 para. 1 lit. c; see Q0), and it must make details of the training data publicly available (Art. 53 para. 1 lit. d; the AI Office → 52 is to draw up a template for this). However, providers who offer the GPAIM under a free and open-source license (FOSS) are exempt from some of these obligations if they make the parameters of the model publicly available. A counter-exception applies to GPAIM with systemic risks (→ 39). In contrast, the general requirements for HRAIS providers do not apply to GPAIM providers (→ 39; as long as they are not also HRAIS providers). |
| 27 | GPAIM | Provider | Offering a GPAIM with systemic risks | The provider of a GPAIM with systemic risks (→ 39) has the obligations of all GPAIM providers. In addition, it must first report the model to the EU Commission, at the latest two weeks after the model has reached the systemic risk threshold (Art. 52 para. 1). The Commission maintains a corresponding public list (Art. 52 para. 6), although the provider can attempt to have its model removed as not systemically relevant (→ 41). Furthermore, the provider is obliged under Art. 55 para. 1 to: - assess systemic risks and mitigate them if necessary, - evaluate the model with regard to risk management, including through adversarial testing or red teaming, - document information on serious incidents (→ 45) and possible mitigation measures and inform the AI Office (→ 52) immediately, - ensure appropriate cybersecurity. |
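The bias-related duties in rows 2 and 3 (Art. 10 para. 2) prescribe no specific metric. The following is a minimal sketch of one common heuristic, a comparison of selection rates across groups; the dataset, field names and the 0.8 threshold are purely hypothetical illustrations, not AIA requirements.

```python
# Minimal illustration of a bias check on training or test data.
# Art. 10 para. 2 requires examining and addressing possible biases
# but prescribes no specific metric; the selection-rate comparison
# below is only one common heuristic. Dataset, field names and the
# 0.8 threshold are hypothetical.

from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="selected"):
    """Share of positive outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of lowest to highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical labeled sample for a recruiting AIS
sample = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

rates = selection_rates(sample)      # {'A': 0.67, 'B': 0.33}
if disparity_ratio(rates) < 0.8:     # illustrative threshold only
    print("Potential bias - document and mitigate per Art. 10")
```

A check of this kind would feed into the documentation of decisions under Art. 10 para. 2 and, where bias is found, into the mitigation measures of the RMS.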
Testing and validation and, in particular, training are key aspects of AIS. The AIA contains special regulations for this:
Providers are obliged to test the HRAIS before placing it on the market or putting it into service (Art. 9 para. 6).
Suitable “data governance and data management procedures” must be applied to the data used for training and testing purposes (see Art. 3 Nos. 29 and 31) (Art. 10 para. 1 and 2). In particular, it must be regulated how the corresponding conceptual decisions are made, which data are required and how they are obtained (in particular personal data), how data are processed (e.g. by annotation, labeling, cleansing, updating, enrichment and aggregation), how test hypotheses are formed and how possible bias is to be dealt with (Art. 10 para. 2).
Training, validation and test data must – in view of the intended use of the HRAIS – be as relevant, representative, accurate and complete as possible. This also means that they must have suitable statistical characteristics (Art. 10 para. 3) and reflect or take into account the context of their use (para. 4).
Under certain circumstances, bias can only be prevented or detected if the data used for training, testing and validation include personal data. For this case, Art. 10 para. 5 exceptionally provides a legal basis within the meaning of Art. 6 and 9 GDPR, i.e. even for special categories of personal data, provided certain conditions are met to ensure data minimization and the protection of the data concerned. This must be documented in the record of processing activities.
The provider must inform the downstream actors, in particular via the technical documentation and the operating instructions (→ 35). The technical documentation must include information on the training and the training datasets used (Annex IV No. 2 lit. d; likewise for providers of a GPAIM in accordance with Annex XI No. 2 lit. b and Annex XII No. 2 lit. c if the GPAIM is to be integrated into an AIS), and the operating instructions must also contain information on the training, validation and test datasets used (Art. 13 para. 3 lit. b no. 6).
HRAIS must generally guarantee a sufficient level of cybersecurity. This also includes adequate protection against attacks during the training phase, e.g. through manipulation of the training data ("data poisoning") or of pre-trained components used during training, such as a GPAIM ("model poisoning"; Art. 15 para. 5). A simple, illustrative integrity check also follows after this list.
Providers of a GPAIM must, like providers of an HRAIS, document the training and testing procedure in the technical documentation (Art. 53 para. 1 lit. a), and they must prepare and publish a summary of the content used for the training (Art. 53 para. 1 lit. d; the FOSS exception of Art. 53 para. 2 covers only lit. a and b).
The cumulative amount of computation used for the training is decisive for the classification of a GPAIM as one with systemic risks (Art. 51 para. 2).
The market surveillance authorities may request access to the training, validation and test datasets, among other things (Art. 74 para. 12).
Facilitations for training apply within the scope of the AI regulatory sandboxes (→ 48).
These obligations are directed at providers. Operators have other, separate obligations with regard to data quality (→ 35).
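To illustrate the bias check mentioned above (Art. 10 para. 2), here is a minimal, purely hypothetical sketch: it compares positive-label rates across groups of a sensitive attribute in a training set. The records, group names and values are invented; real bias audits are considerably more involved.

```python
from collections import defaultdict

# Hypothetical training records: (sensitive_group, label)
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

counts = defaultdict(lambda: [0, 0])  # group -> [positive labels, total]
for group, label in records:
    counts[group][0] += label
    counts[group][1] += 1

rates = {g: pos / tot for g, (pos, tot) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)                     # per-group positive-label rates
print(f"parity gap: {gap:.2f}")  # a large gap may point to bias in the data
```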
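Equally hypothetical, as an illustration of protecting a training pipeline against data poisoning (Art. 15 para. 5): a fingerprint of the dataset, recorded at a trusted point in time, makes later tampering detectable. The file name and workflow are assumptions; real-world controls (signatures, access control, provenance tracking) go much further.

```python
import hashlib

def dataset_fingerprint(path: str) -> str:
    """SHA-256 over a dataset file; a later mismatch indicates that the
    data was altered after the baseline was recorded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the fingerprint once, then verify it before every training run.
baseline = dataset_fingerprint("train.csv")  # hypothetical file
assert dataset_fingerprint("train.csv") == baseline, "possible data poisoning"
```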
The AI Act places particular emphasis on transparency, especially for AIS that make decisions. This can also apply to AIS that are not HRAIS. In particular, Chapter IV, with its single Art. 50, contains corresponding provisions; its first two paragraphs address providers and the following two operators.
Providers have the following obligations in particular:
System design: Providers must design the HRAIS in such a way that its operation is transparent, i.e. that the output can be interpreted and used consciously (Art. 13 para. 1). The AIA does not conclusively specify how this is to be ensured.
Operating instructions: HRAIS must be accompanied by operating instructions (→ 35 No. 6).
Interaction with data subjects: For AIS that are intended to interact with natural persons (e.g. chatbots), those persons must be informed that they are interacting with an AIS – unless this is obvious from the circumstances (Art. 50 para. 1), e.g. in the case of a translation service or a chatbot such as ChatGPT. The providers of the corresponding AIS may have to ensure this. The designation as a "bot" may often be sufficient for this purpose.
Synthetic content: AIS providers must mark synthetic outputs in a machine-readable format and ensure that they are recognizable as artificially generated or manipulated (Art. 50 para. 2). This obligation applies to providers, not operators (see the next point). Reference should be made here to the work of the Coalition for Content Provenance and Authenticity (C2PA; https://c2pa.org/); an illustrative sketch follows after this list.
AIS with a merely assistive function for standard editing or that do not substantially alter the input are exempt. No marking obligation therefore applies, for example, to texts written by a human and merely edited with DeepL or ChatGPT. Beyond the wording, this must also apply, analogously to Art. 50 para. 4, where a text was generated by an AIS but revised or at least substantively reviewed by a human; in this case, the human has made the text their own, which is why it should no longer be treated as synthetic.
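By way of illustration only: the following sketch embeds a simple machine-readable marker in an image file's metadata. The key and value are invented; this is neither what Art. 50 para. 2 prescribes in detail nor the C2PA standard, which works with cryptographically signed provenance manifests, but it shows the basic idea of machine-readable marking.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64))  # stand-in for an AI-generated image

meta = PngInfo()
meta.add_text("ai_generated", "true")  # hypothetical, unsigned marker
img.save("output.png", pnginfo=meta)

# A downstream system can read the marker back mechanically:
print(Image.open("output.png").text)  # {'ai_generated': 'true'}
```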
Operators have the following obligations in particular:
In the case of obviously artistic, creative, satirical, fictional or analogous works, the disclosure of artificial generation or manipulation must be made in a way that does not impair the presentation or enjoyment of the work.
Generative AIS: Operators of a generative AIS must disclose that content has been artificially generated or manipulated (Art. 50 para. 4). For published texts, however, this only applies if they are intended to inform the public about matters of public interest, and not if the generated texts have undergone human review or editorial control and someone bears editorial responsibility for the publication. An exception also applies in the area of law enforcement.
Emotion recognition: The operator of a (non-prohibited → 27) emotion recognition system or a biometric categorization system must inform the natural persons concerned (Art. 50 para. 3; again with an exception for the area of law enforcement).
Decisions: If the operator of an HRAIS under Annex III (use cases → 28) uses the HRAIS to make or support a decision affecting natural persons, those persons must be informed accordingly (Art. 26 para. 11).
Human oversight: Art. 26 contains requirements for operators on exercising human oversight, which also have a transparency aspect.
The mandatory information must be provided in a clear, unambiguous and accessible manner at the latest at the time of the first interaction or exposure (Art. 50 para. 5).
For GPAIM, transparency measures are also specified, but separately in Art. 53 (see → 40 and 42). Further requirements may follow from other provisions, e.g. from the information and transparency obligations of the applicable data protection law when personal data are processed.
“AI literacy” or “AI competence” refers to the skills required for the competent and risk-aware use of an AIS (Art. 3 No. 56). Art. 4 therefore requires measures to impart this competence to staff and auxiliary persons (insofar as they are to handle an AIS). Training, instructions and other information can be considered for this purpose.
This “upskilling” is the only explicit obligation that the AIA imposes on providers and operators of all AIS. However, such AIS may fall under sectoral requirements, and if they are supplied to consumers, general product safety law may apply. Whether the Swiss PrHG also applies to AIS that are not installed in a product such as a robot has not been conclusively clarified. Further obligations arise for AIS in special constellations from the transparency requirements (→ 37).
GPAIM are regulated separately in a chapter of their own, Chapter V. This is due to the legislative history, in which the regulation of GPAI was controversial until the end (→ 3). Within the GPAI models, a particularly sensitive category is singled out: the GPAI models "with systemic risks" (→ 41).
GPAIM are "AI models" (not a defined term) that are generally usable, competently perform a "wide range of distinct tasks" and can be integrated into downstream AIS (Art. 3 No. 63). This primarily concerns Large Language Models (LLMs) such as the GPT models from OpenAI or Claude from Anthropic. General usability is presumed if a model has at least one billion parameters and has been trained on a large amount of data using self-supervision at scale (Recital 98 → 12). By contrast, a model, e.g. an LLM, that has been trained for a narrow area of application would not be a GPAIM.
It is important to note that a GPAIM is not an AIS. It only becomes an AIS, and where applicable an HRAIS, through the addition of further components (Recital 97: the concept "should be clearly defined and distinguished from the concept of AI systems"; "although AI models are essential components of AI systems, they do not in themselves constitute AI systems"). So: GPAI model + additional component = AIS. Little is needed for the step from GPAI model to (HR)AIS: a user interface is sufficient (Recital 97).
It is also possible for a GPAIM to be built into another model, which then itself becomes a GPAIM (Recital 100). LLMs can also be trained further (e.g. by fine-tuning → 12). If this sufficiently narrows the scope of application, it is conceivable that the resulting model no longer has general applicability.
The provider of a GPAIM – i.e. the entity that develops the GPAIM and places it on the market – accordingly becomes the provider of an (HR)AIS as soon as it puts the GPAIM to a specific use and the resulting AIS is placed on the market or put into service. Following this logic, Art. 53 requires, among other things, that the GPAIM provider provide certain information to the provider of the downstream AIS (even if that AIS is not an HRAIS).
An LLM (→ 12) from OpenAI would be an example of a GPAI model. ChatGPT, on the other hand, has a user interface and is therefore likely to be an AIS (even if this is not undisputed). If a third party uses a model from OpenAI and builds its own chatbot with it, this third party and not OpenAI is the provider of the chatbot as an AIS. Of course, this also applies if the third party in question further adapts the chatbot to its own needs by fine-tuning it.
In addition to the GPAIM, the AIA also defines GPAIS (general-purpose AI systems; Art. 3 No. 66). GPAIS are a subset of AIS and are subject to the corresponding rules. The AIA therefore mentions GPAIS only in passing (in Art. 3 No. 68, Art. 25 para. 1 lit. c, Art. 50 para. 2 and Art. 75 para. 2, and in some recitals).
As mentioned, the obligations of the GPAIM provider (→ 20) are regulated in a chapter of their own. The requirements for HRAIS providers – in particular Art. 16 AIA and the provisions referred to therein – do not apply to GPAIM providers. However, GPAIM providers must:
prepare technical documentation of the GPAIM, including the training and testing procedure and the results of its evaluation; the minimum information is set out in Annex XI. It must be made available to the AI Office and the competent national authorities on request (Art. 53 para. 1 lit. a). An exception applies to FOSS (Art. 53 para. 2);
document further information on the GPAIM (in particular in accordance with Annex XII) and make it available to the providers of downstream AIS (Art. 53 para. 1 lit. b). The FOSS exception also applies here (Art. 53 para. 2);
have a policy for compliance with EU copyright law. This also includes an indication of how, in the case of the text and data mining exception (→ 59), a reservation of use within the meaning of Art. 4(3) of the Copyright Directive (https://dtn.re/c6zFb9) is complied with (Art. 53 para. 1 lit. c). It should be noted that, according to Recital 106, this requirement also applies to non-European GPAIM providers that place a GPAIM on the market in the EU;
publish a summary of the training content (the AI Office is to provide a template for this), subject to the protection of trade secrets (Recital 107).
They may also have to appoint an authorized representative (Art. 54 → 26). As elsewhere, the Commission can further specify the requirements (→ 51).
Systemic risks are risks that have a significant impact due to the reach of the GPAIM or due to possible negative consequences "for public health, safety, public security, fundamental rights or society as a whole" and that can propagate across the entire value chain (Art. 3 No. 65).
However, whether this applies to a GPAIM is determined not by the legal definition but by the criteria of Art. 51 para. 1, under which a systemic risk exists in two cases:
when the GPAIM has "high-impact capabilities", which is to be assessed using suitable tools and methods such as benchmarks (Art. 51 para. 1 lit. a); such capabilities are in any case presumed if "the cumulative amount of computation used for its training" exceeds 10²⁵ floating point operations. Floating point operations, in turn, are defined as a mathematical quantity in Art. 3 No. 67. This threshold is likely to be adjusted in the future (Recital 111). A rough worked example follows after this list;
when the EU Commission decides that a systemic risk exists, with Annex XIII providing the relevant criteria (lit. b and Art. 52 para. 4-5). These relate to the capabilities of the model, expressed among other things by the number of parameters or the size of the training data, but also to the model's market reach.
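As a rough, purely illustrative calculation (the figures are invented, and the approximation C ≈ 6ND is a common heuristic from the scaling-law literature, not part of the AIA): for a dense model with N parameters trained on D tokens, the training compute C is often estimated as

$$C \approx 6\,N\,D = 6 \times 10^{11} \times 1.5 \times 10^{13} = 9 \times 10^{24}\ \text{FLOPs} < 10^{25}\ \text{FLOPs},$$

i.e. a hypothetical model with 100 billion parameters trained on 15 trillion tokens would land just below the presumption threshold of Art. 51 para. 2.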
The provider must first notify the Commission (→ 51) of the GPAIM with systemic risks as soon as possible once the GPAIM has reached the systemic risk threshold, but at the latest after two weeks (Art. 52 para. 1). It may then attempt to demonstrate that its GPAIM exceptionally does not pose systemic risks after all, provided the initial qualification rests on the substantive criterion of Art. 51 para. 1 lit. a; it must present appropriate arguments to the Commission. If the Commission is not convinced, the GPAIM is entered on the list of GPAIM with systemic risks (Art. 52 para. 6). – If the Commission has classified the GPAIM as systemically risky ex officio, the provider may request a reassessment (Art. 52 para. 5).
Providers of GPAIM with systemic risks have additional obligations, i.e. on top of the obligations of providers of less sensitive GPAIM. They must (Art. 55):
evaluate the GPAIM in accordance with standardized protocols, including through adversarial testing (red teaming);
assess and reduce systemic risks at EU level;
document information on serious incidents and possible remedial measures and inform the AI Office and the competent national authorities if necessary; and
ensure adequate cyber security.
Market surveillance is a central element of the AIA – it is intended to ensure both the compliance of AIS in the interests of the persons concerned and a level playing field.
Providers must therefore operate a post-market monitoring system once the HRAIS has been placed on the market (Art. 72 para. 1). This includes the collection, documentation and analysis of data on the performance of the HRAIS (which may be obtained via the operators) throughout the entire lifecycle of the HRAIS.
This system includes in particular a plan for monitoring the HRAIS after it has been placed on the market. This plan is in turn part of the technical documentation in accordance with Annex IV (→ 35 No. 6); the Commission has yet to specify what such a plan must look like (Art. 72 para. 3). If an HRAIS falls under Annex I Section A (e.g. medical devices), providers can integrate the requirements of the AIA into existing systems and plans (Art. 72 para. 4).
Market surveillance also includes the obligation to react to non-compliance (→ 44) and to certain incidents (→ 45), as well as the corresponding powers of the authorities.
In general, AIS are also products within the meaning of the Market Surveillance Regulation (Art. 74 para. 1; https://dtn.re/JgakBQ). The market surveillance authorities (→ 55) can therefore take action whenever an AIS – it does not have to be an HRAIS – is likely to endanger the health or safety of users and does not comply with the applicable harmonization legislation (Art. 16 para. 1 of the Market Surveillance Regulation).
A reaction is required not only to serious incidents (→ 45), but of course also whenever an HRAIS no longer meets the relevant requirements. The AIA places responsibility not only on the provider but also on other actors.
If providers have reason to believe that an HRAIS no longer complies with the AIA at any time after it has been placed on the market or put into service, they must rectify the non-compliance immediately or, where necessary, withdraw, disable or recall the HRAIS (Art. 20 para. 1). "Withdrawal" means preventing the making available of an HRAIS already in the supply chain (Art. 3 No. 17), and "recall" means that the HRAIS is returned or at least taken out of service or disabled (Art. 3 No. 16).
Providers must also inform the downstream market accordingly, i.e. the distributors, the operators, the authorized representative and the importers (Art. 20 para. 1). If the HRAIS also presents a risk within the meaning of Art. 79 para. 1 AIA, the corresponding obligations apply (→ 45).
The downstream actors are also involved in the event of non-compliance. Importers may only place the HRAIS on the market once compliance has been restored (Art. 23 para. 2), and the same applies to distributors with regard to making it available on the market (Art. 24 para. 2).
Authorized representatives also have duties: if they have reason to believe that the provider is acting in breach of the AIA, they must terminate their mandate and inform the competent market surveillance authority and, where applicable, the notified body of this and of the reasons (Art. 22 para. 4).
As part of market surveillance (→ 43), certain incidents must be documented and reported. This obligation applies to the providers of HRAIS and is triggered by "serious incidents". These are malfunctions, but also, more generally, incidents that lead directly or indirectly to death or serious harm to health, to a "serious and irreversible" disruption of the management or operation of critical infrastructure, to a breach of fundamental rights obligations or to serious damage to property or the environment (Art. 3 No. 49).
If such an incident occurs, the provider must report it to the responsible market surveillance authorities (→ 54), with special rules applying to certain HRAIS. The report must be made immediately upon discovery, but no later than 15 days after the provider or, as the case may be, the operator became aware of the incident (Art. 73 para. 2).
If an incident has widespread effects ("widespread infringement") or affects critical infrastructure, the reporting period is shortened to two days (Art. 73 para. 3 AIA), and in the event of death it is ten days (para. 4). As in data protection law or for reports to FINMA, an initial report followed by a complete report is possible. A simplified sketch of these tiers follows below.
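Purely as a simplified illustration of these tiers (the function and its parameters are hypothetical, and edge cases, e.g. an incident that is both widespread and fatal, would require legal analysis):

```python
def reporting_deadline_days(death: bool, widespread_or_critical_infra: bool) -> int:
    """Simplified reading of Art. 73 para. 2-4 AIA: 15 days as the general
    maximum, 2 days for widespread infringements or critical-infrastructure
    disruption, 10 days in the event of death."""
    if widespread_or_critical_infra:
        return 2
    if death:
        return 10
    return 15

print(reporting_deadline_days(death=False, widespread_or_critical_infra=False))  # 15
print(reporting_deadline_days(death=True, widespread_or_critical_infra=False))   # 10
```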
After the notification, the market surveillance authorities inform the competent national authorities. If necessary, they must also order within seven days that the HRAIS be recalled or withdrawn from the market or that its making available on the market be prohibited (Art. 73 para. 8 in conjunction with Art. 19 of the Market Surveillance Regulation, https://dtn.re/ElQE2G).
The provider must also investigate the incident, assess the risks and, where possible, take corrective action (Art. 73 para. 6 AIA), in cooperation with the competent authorities.
In addition to the providers, operators have obligations in the event of a serious incident: they must inform the provider of such incidents (Art. 26 para. 5 and Art. 72). In the case of particularly sensitive HRAIS or use in critical infrastructures, contractual provisions on this reporting obligation are to be expected in practice, even though it already follows from the AIA.
Serious incidents must be distinguished from cases in which an HRAIS presents particular risks, i.e. atypically high risks to health, safety or fundamental rights (Art. 79 para. 1). Here, too, various roles have corresponding duties. If a market surveillance authority has reason to believe that such risks exist, it examines the AIS in question and – if the suspicion is confirmed – informs the competent national authorities. Operators also have special obligations in such a case if an HRAIS is concerned.
All persons (natural and legal persons) have the right to lodge a complaint with the competent market surveillance authority (→ 55) if they have reason to believe that a provision of the AIA has been violated (Art. 85 para. 1). A person does not have to be particularly affected – competitor complaints are also possible.
In the case of certain significant decisions, data subjects also have the right to request from the operator an explanation of the role of the AIS in the decision and of the main elements of the decision (→ 35 No. 13).
Affected parties also have the right to lodge a complaint with the AI Office (Art. 89 para. 2). This also applies to providers who have incorporated a GPAIM into their own AIS.
In addition, there are rights under other legal bases, in particular under the applicable data protection law (→ 58) and, where applicable, under contractual arrangements. Claims for damages may also be possible under certain circumstances.
Implementing the requirements of the AIA will be challenging for SMEs, at least if they are active as providers. Anyone who purchases a GPAIM and places it on the market as an HRAIS becomes a provider of the HRAIS – there are therefore likely to be a large number of SMEs that cover a specific use case on the basis of an LLM and are providers for this use case.
In principle, the provisions of the AIA apply tel quel to SMEs as well. However, the AIA contains some provisions intended to support SMEs, including start-ups:
Art. 62 obliges the Member States to take support measures: granting SMEs priority access to the AI regulatory sandboxes, carrying out awareness-raising and training measures for SMEs, providing channels for questions on the AIA and on participation in the sandboxes, and involving SMEs in the development of standards (→ 15).
SMEs are to be represented in the Advisory Forum (Art. 67 para. 2).
The interests of SMEs must be taken into account in codes of conduct (Art. 95 para. 4).
A somewhat lower cap applies to fines (Art. 99 para. 6).
For micro-enterprises within the meaning of Commission Recommendation C(2003)1422 (https://dtn.re/U7vlKH), Art. 63 para. 1 also provides for a simplified QMS (→ 35).
The AIA is committed to promoting innovation in various recitals, and its greatest contribution to promoting innovation is probably the fact that it is not a prohibition law (with the few exceptions, Q0). Chapter VI (Art. 57 ff.) is then expressly dedicated to the promotion of innovation.
Two main elements serve this purpose. The first element is the "AI regulatory sandboxes" (the German version of the AIA speaks of "KI-Reallabore"):
These involve facilitating the development, training, testing and validation of AIS before they are placed on the market or put into service, in accordance with a plan to be agreed between the providers and the competent authority (Art. 57 para. 5) and, where relevant, with the involvement of the data protection authorities (para. 10).
Art. 59 then contains a limited legal basis for the processing of personal data in the context of a sandbox: personal data may be processed for development, training and testing purposes in the sandbox, but only if certain conditions are met and only for the development of an AIS safeguarding certain public interests. This legal basis complements the analogous legal basis for testing purposes under Art. 10 (→ 36).
Providers can then receive documentation of the activities carried out in the sandbox and a final report, which can be used in the conformity assessment procedure or to facilitate market surveillance (para. 7). Compliance with the plan also provides a safe harbor against fines in the event of a violation of the AIA in connection with the plan, and possibly also of other requirements, in particular of data protection law (para. 12).
Each Member State must set up at least one such sandbox by August 2, 2026 (Art. 57 para. 1). The Commission is to issue more detailed rules before then (Art. 58 AIA).
The second element is the testing of Annex III HRAIS under real-world conditions:
HRAIS according to Annex III (i.e. the use-case-related HRAIS; → 28) can be tested under real-world conditions outside an AI regulatory sandbox under certain conditions (Art. 60). This requires that the test is controllable, i.e. that the test is effectively monitored and that predictions, recommendations or decisions of the AIS can be reversed or disregarded (Art. 60 para. 4 lit. j-k). Serious incidents must be reported in accordance with Art. 73, i.e. the corresponding reporting obligation (→ 45) is brought forward to the time before placing on the market or putting into service (Art. 60 para. 7).
Tests must be based on a plan to be approved by the competent market surveillance authority (Art. 60 para. 4 lit. a‑b).
Insofar as the plan requires the participation of test participants, they must in principle consent to participation (Art. 61 para. 4 lit. j and para. 5).
Chapter XII concerns sanctions for violations of the AIA. Unlike the GDPR, the AIA itself does not contain any directly applicable provisions on fines, but in Art. 99 requires the Member States to introduce provisions on fines and other enforcement measures. Fines can be imposed on all actors, i.e. all entities involved in the value chain.
Depending on the type of infringement, the fines can reach up to EUR 35 million or 7% of turnover:
In the event of a violation of the prohibited practices (→ 27), the upper fine amount of up to EUR 35 million or 7% of the worldwide annual turnover applies (Art. 99 para. 3). As with the GDPR, the group turnover is likely to be decisive for this.
For certain other infringements, the upper limit for fines is EUR 15 million or 3% of annual turnover (Art. 99 para. 4). These fines can be imposed on operators as well as notified bodies. This concerns violations of Art. 16 (providers), Art. 22 (authorized representatives), Art. 23 (importers), Art. 24 (distributors), Art. 26 (operators), Art. 31, Art. 33 para. 1, 3 and 4 and Art. 34 (notified bodies), and Art. 50 (transparency; providers and operators).
In the case of incorrect, incomplete or misleading information provided to notified bodies or the competent national authorities, the fine limit is EUR 7.5 million or 1% of annual turnover (Art. 99 para. 5).
The higher amount is decisive in each case, except in the case of SMEs (for which the lower amount applies; Art. 99 para. 6; → 47). In the specific case, the court or administrative authority (Art. 99 para. 9) must take into account the criteria of Art. 99 para. 7 when determining the fine, including the gravity of the infringement. A minimal sketch of this cap logic follows below.
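Purely for illustration (the function and the company figures are invented; only the percentages and fixed amounts come from Art. 99):

```python
def fine_cap_eur(annual_turnover_eur: float, pct: float, fixed_cap_eur: float,
                 is_sme: bool = False) -> float:
    """Cap logic of Art. 99 para. 3-6 AIA, simplified: the higher of the fixed
    amount and the turnover-based amount applies, except for SMEs, for which
    the lower of the two applies."""
    turnover_based = annual_turnover_eur * pct
    if is_sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)

# Prohibited practices (Art. 99 para. 3): 7% or EUR 35 million
print(fine_cap_eur(600e6, 0.07, 35e6))              # 42000000.0 (7% exceeds EUR 35m)
print(fine_cap_eur(40e6, 0.07, 35e6, is_sme=True))  # 2800000.0 (lower amount for SMEs)
```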
For providers of GPAIM, Art. 101 contains a special provision. All violations of the AIA can be fined (Art. 101 para. 1 lit. a); however, Art. 101 para. 1 specifically mentions certain violations. The fine limit here is EUR 15 million or up to 3% of annual turnover.
The occurrence of a serious incident (→ 45) must of course be distinguished from such infringements.
The AIA regulates the role of several authorities primarily in its own chapter on "Governance" (Chapter VII, Art. 64 ff.). Various authorities and institutions are entrusted with different and partly overlapping tasks. There is both a horizontal division of labor (within the EU) and a vertical one (between the EU and the Member States).
The former is governed by Section 1 of Chapter VII (Governance). Among the EU bodies, the Commission plays the leading role and is generally responsible for enforcing the AIA. It has far-reaching powers, can issue concretizing provisions and receives notifications from actors and other authorities (→ 51).
The AI Office ("Office for Artificial Intelligence") is part of the Commission and is responsible for the market surveillance of GPAIM and of AIS based on a GPAIM of the same provider (Art. 88 and 75; → 52).
The European AI Board (EAIB) is to advise and support the Commission (and the Member States) in this (→ 53).
The national market surveillance authorities are responsible for monitoring compliance with the AIA (→ 54).
The notifying national authorities are responsible for the assessment, designation, notification and monitoring of AI conformity assessment bodies (Art. 28).
Conformity assessment bodies, in turn, are bodies that check and assess the conformity of AIS in accordance with the AIA (Art. 3 No. 21; → 15).
Due to the extensive cooperation obligations of the actors and the wide-ranging information-gathering powers of the authorities, the Commission, the market surveillance authorities, the notified bodies and all other bodies involved in the application of the AIA are subject to a confidentiality obligation (Art. 78).
The main role at EU level lies with the Commission and the AI Office as part of the Commission (→ 52).
The Commission, which monitors compliance with EU law in accordance with Art. 17 para. 1 of the Treaty on European Union, has a central role here. Its powers can be categorized as follows (not exhaustively – some other, subordinate tasks of the Commission are not listed):
Power to concretize: Art. 97 AIA confers on the Commission, on the basis of Art. 290 TFEU (https://dtn.re/9MhpKX), the right to adopt binding "delegated acts". EU law distinguishes between delegated acts and implementing acts. Delegated acts are legal acts supplementing or amending the basic act (here the AIA), which the Commission submits to the Council and Parliament for approval or rejection. Implementing acts are mere implementing provisions, such as technical rules or exemptions, which are not submitted to Parliament and the Council.
The power to adopt delegated acts is based on Art. 97 AIA and is intended to allow the particularly rapid technical development in the field of AI to be reflected. It concerns the following points:
the criteria for when an AIS is deemed an HRAIS (→ 28), and correspondingly Annex III (use cases; Art. 7 para. 1 and 3);
Annex IV on the minimum content of the technical documentation (Art. 11 Para. 3);
Annexes VI and VII and Art. 43 (1) and (2) on the conformity assessment procedure and Annex V on the content of the EU declaration of conformity;
the criteria for classifying a GPAIM as systemically risky in accordance with Art. 51 (1) and (2) and Annex XIII;
Annexes XI and XII on the content of the technical documentation and transparency requirements for downstream use of GPAIM (Art. 53).
In addition, the Commission can issue implementing acts, generally in compliance with the Implementing Powers Regulation (https://dtn.re/B9uV04) (Art. 98 para. 2). These include:
Intervention where a notified body does not or no longer meets the requirements (Art. 37 para. 4);
Approval of codes of practice in connection with GPAIM pursuant to Art. 56, in general and in particular to specify the transparency requirements for AI-generated or manipulated content (Art. 50 para. 7), the obligations of GPAIM providers under Art. 53 and of providers of GPAIM with systemic risks under Art. 55 (Art. 56 para. 6);
Adoption of common specifications where relevant standards are missing (Art. 41 AIA), and of common rules in the area of GPAIM if no code of practice exists by August 2, 2025 (Art. 56 para. 9);
Concretizing rules for the AI regulatory sandboxes (Art. 58 para. 1 and 2) and for tests of HRAIS under real-world conditions (Art. 60);
Provisions on the establishment of a scientific panel of independent experts (Art. 68 para. 1 and 5 and Art. 69 para. 2);
Concretizations for the post-market monitoring plan of HRAIS providers (Art. 72 para. 3);
Concretization of the sanctions procedure (Art. 101 para. 6).
The Commission can further contribute to harmonizing practice by issuing guidelines and through standardization:
The Commission is generally to hold the reins in the application of the AIA. For example, it issues standardization requests in accordance with Art. 10 of the Standardisation Regulation (https://dtn.re/BRL10Q), i.e. mandates for the development of those standards compliance with which gives rise to a presumption of conformity (Art. 40 para. 1 and 2), and it can – in the absence of relevant standards – issue corresponding "common specifications" (Art. 41 AIA).
According to Art. 96 AIA, it can also issue general guidelines on the practical implementation of the AIA. Although Art. 96 contains a list of points to be concretized – in particular the definition of an AIS, the application of Art. 8 et seq. with the basic requirements, the classification as HRAIS (Art. 6 para. 5), the prohibited practices and transparency in accordance with Art. 50 AIA – it is not exhaustive.
The Commission furthermore approves codes of practice in accordance with Art. 56 AIA, i.e. a concretization of the obligations of GPAIM providers.
It also provides templates and forms, which are likely to be of considerable importance in practice. For example, a simplified form for the technical documentation of HRAIS of SMEs is envisaged (Art. 11; Annex IV).
The Commission further receives notifications and reports, in particular:
real-time remote biometric identification for law enforcement purposes: notification by the Member States of the relevant legal bases (Art. 5 para. 5) and annual reports by the national market surveillance and data protection authorities (Art. 5 para. 6);
Conformity assessment: notification by the notifying authorities of conformity assessment bodies (Art. 30 para. 2 et seq. and Art. 36 para. 1, 4 and 7); notification by the market surveillance authorities of exemptions for HRAIS under Art. 46 para. 1 (Art. 46 para. 3; the Commission may intervene);
GPAIM: notification by providers of GPAIM with systemic risks (Art. 52 para. 1);
Notification of the notifying authorities and market surveillance authorities by the Member States (Art. 70 (2) and (6));
Notification by the national authorities of serious incidents (Art. 73 para. 11) in accordance with the Market Surveillance Regulation (https://dtn.re/ubfeIK);
annual notification by the market surveillance authorities of information from market surveillance and the use of prohibited practices (Art. 74 para. 2);
Notification by the Member States of the national authorities or public bodies responsible for the supervision of the protection of fundamental rights (Art. 77 (1) and (2));
Information from the Member States in connection with risky AIS within the meaning of Art. 79 para. 1 (Art. 79 para. 3 et seq.);
Information from the Member States in connection with risky AIS that the provider has classified as not high-risk (Art. 80 para. 3) and with compliant HRAIS that nevertheless entail a particular risk (Art. 82 para. 1 and 3);
Notifications by the Member States on their provisions on sanctions and other enforcement measures and on their fining practice (Art. 99 para. 2 and 11); notification by the EDPS on its fining practice (Art. 100 para. 7).
The Commission also has intervention and decision-making powers:
Sanctioning of providers of GPAIM (Art. 101 para. 1);
Objections to exceptional authorizations for HRAIS pursuant to Art. 46 para. 1 (Art. 46 para. 4 and 5);
Classification of a GPAIM as systemically risky (Art. 52 para. 2 – 5);
Assessment of the procedures that providers of GPAIM or systemically risky GPAIM can use to provide evidence of their respective obligations under Art. 53 or 55 (where no harmonized standards exist; Art. 53 para. 4 and 55 para. 2);
Intervention where an AIS with particular risks within the meaning of Art. 79 para. 1 is not compliant or a compliant HRAIS is nevertheless particularly risky, and the Commission does not agree with the measures taken by the competent market surveillance authority (Art. 81 and 82).
Finally, the Commission provides information through publications and announcements:
List of notified bodies (Art. 35 para. 2);
List of systemically risky GPAIM (Art. 52 para. 6);
List of central contact points of the Member States (Art. 70 (2));
HRAIS database in accordance with Annex III (Art. 71);
Reporting to Parliament and the Council (Art. 112).
And finally, the Commission has enforcement powers with regard to GPAIM:
GPAIM are specifically regulated in Chapter V. The Commission is tasked with enforcing the provisions of this chapter; this is regulated in a dedicated Section 5 of Chapter IX (post-market monitoring, information sharing and market surveillance). The Commission must be kept informed accordingly by the market surveillance authorities (Art. 73 para. 11, Art. 74 para. 2, Art. 77 para. 2, Art. 79 para. 3 et seq., Art. 80 para. 3).
The Commission can intervene if it does not agree with measures taken by the Member States concerning AIS or HRAIS with particular risks (Art. 81 and Art. 82 para. 4 et seq.).
It is also generally responsible for enforcing Chapter V (Art. 88 para. 1). To this end, it can request information from GPAIM providers (Art. 91 para. 1 and 3 and Art. 92 para. 3), appoint experts to assess GPAIMs (Art. 92 para. 2) and require GPAIM providers to comply with their obligations, take risk mitigation measures and withdraw a GPAIM from the market (Art. 93 para. 1).
The AI Office was established by a Commission decision (https://dtn.re/cvmxvL), albeit with a slightly different designation, namely as the "European Artificial Intelligence Office"; both refer to the same AI Office (the English term is becoming established). It is part of the Commission's Directorate-General for Communication Networks, Content and Technology. It has more than 140 employees and is divided into five units: the "Excellence in AI and Robotics Unit", the "Regulation and Compliance Unit", the "AI Safety Unit", the "AI Innovation and Policy Coordination Unit" and the "AI for Societal Good Unit".
The tasks of the Office are set out in Art. 3 No. 47, Art. 64 and other provisions of the AIA as well as in the aforementioned decision, which lists these and other tasks and the powers of the Office. The main tasks are as follows:
Coordinative tasks (e.g. cooperation with stakeholders, other Commission departments, other EU bodies and with the Member States and their authorities);
Technical contributions (e.g. monitoring economic and technical developments, drafting guidelines and model terms [Art. 25, 27 para. 5, 50 para. 7, 53 para. 1 lit. d, 56 and 62 para. 2] and preparing Commission decisions [Art. 56]);
the market surveillance of GPAIM and of AIS that a provider builds on the basis of its own GPAIM (Art. 88 and Art. 75 and Art. 3 of the aforementioned Commission decision). The Office monitors compliance with the AIA by the relevant actors and also serves as a point of contact for reports of serious incidents (→ 45).
In addition, the Office also oversees the AI Pact (https://dtn.re/WJfwxl).
The "European Artificial Intelligence Board" ("AI Board"; also "EAIB", for "European AI Board"; https://dtn.re/QQhGJ7) is established by Art. 65.
It is to advise and support the Commission and the Member States in order to facilitate the consistent and effective application of the AIA (Art. 66 contains a list of its tasks; further tasks are assigned to it elsewhere in the AIA). Among other things, it supports the AI Office in the creation of codes of practice. The EDPB and the Commission often take opposing positions on the application of the GDPR; it remains to be seen whether this will also be the case under the AIA.
An Advisory Forum supports the EAIB and the Commission with technical expertise. It is made up of representatives of industry, start-ups, SMEs, civil society and academia as well as of European institutions (e.g. the European Committee for Standardization CEN or ENISA) (Art. 67).
In addition, the Commission is to set up a scientific panel of independent experts ("scientific panel of independent experts"). It is intended to support the AI Office in its market surveillance activities with scientific and technical expertise (Art. 68).
The market surveillance authorities (Art. 3 No. 26 and 48) are responsible for the market surveillance of HRAIS and GPAIM (Art. 74 ff.). Each Member State must designate at least one such authority (Art. 70 para. 1). For regulated products (Art. 6 para. 1), the authorities competent under the relevant product legislation are generally also the market surveillance authorities under the AIA (Art. 74 para. 3); in the financial sector, it is the financial market supervisory authorities (Art. 74 para. 6), and for EU bodies the EDPS (Art. 74 para. 9). AIS based on a GPAIM developed by the same provider (e.g. ChatGPT) are a special case: here, market surveillance lies with the AI Office (→ 52).
Their powers and tasks follow in particular from the Market Surveillance Regulation (Art. 3 No. 26; Art. 14 ff. of that Regulation; https://dtn.re/QCMYaE) and from the requirements of Art. 70 para. 1. For example, they can
in the case of a serious incident, order that an HRAIS be recalled or withdrawn from the market or that it not be made available on the market (Art. 19 of the Market Surveillance Regulation);
request information from providers, who are subject to duties to cooperate, at any time regarding their activities (Art. 74 para. 12 and 13 and Art. 75 para. 3); and
where necessary, order tests of HRAIS (Art. 77 para. 3).
If an AIS presents a particular risk to the health or safety of persons, to health and safety at the workplace, to consumer protection, the environment, public security or other public interests (Art. 79 para. 1 in conjunction with Art. 3 No. 19 of the Market Surveillance Regulation), the competent market surveillance authority may check the conformity of the AIS concerned and, if necessary, order corrective measures and a recall (Art. 79 para. 5).
In the case of AIS which the provider has classified as not high-risk, the market surveillance authority may – if it takes a different view – order that conformity be established (Art. 80 para. 1 and 2). It may also order corrective measures for compliant but nevertheless particularly risky HRAIS (Art. 82 para. 1). It can also take measures in the event of formal non-compliance, e.g. a missing CE marking (Art. 83).
The market surveillance authorities also have the following tasks in particular under the AIA:
Informing the Commission of certain legal provisions relating to real-time remote biometric identification for law enforcement purposes (Art. 5 para. 4 and 6);
Receipt of information and notifications, in particular the following:
from providers and operators of HRAIS on particular risks (Art. 79 para. 2 and Art. 26 para. 5); from operators of an HRAIS on serious incidents (Art. 26 para. 5);
copies of the appointment mandate of representatives of non-European HRAIS providers and of its termination (Art. 22 para. 3 and 4);
Information on non-compliant HRAIS from importers (Art. 23 para. 2);
Fundamental Rights Impact Assessments (FRIA) of public bodies (Art. 27 para. 3);
Information on tests of HRAIS under real conditions (Art. 60);
Reports of serious incidents involving HRAIS (Art. 73 para. 1);
Information from other bodies:
from national authorities and public bodies within the meaning of Art. 77 where a serious incident involving an HRAIS has been reported to them (Art. 73 para. 7);
from the Commission on measures in the event of serious incidents (Art. 19 para. 1 of the Market Surveillance Regulation);
annual reporting to the Commission (Art. 74 para. 2);
Exceptional approval of a HRAIS according to Art. 46;
Approval and review of tests of HRAIS under real-world conditions (Art. 60 para. 4 lit. b, Art. 76); where necessary also intervention if a serious incident occurs or a test does not comply with the applicable conditions (Art. 76 para. 3 and 5);
Acceptance of complaints from natural or legal persons (Art. 85).
Conformity assessment bodies carry out the conformity assessment (Art. 3 No. 21). They are designated by the notifying authorities (Art. 28 para. 1, Art. 29 para. 1 and Art. 30 para. 1; → 57) and must meet the requirements of Art. 31; in particular, they must be independent. Conformity assessment bodies in third countries may also operate under the AIA, provided a corresponding agreement exists with the EU (Art. 39). A conformity assessment body is called a "notified body" if it has been notified in accordance with the relevant provisions (Art. 3 No. 22). On the conformity assessment procedure → 15.
Each Member State must designate a notifying authority (Art. 28 para. 1 and Art. 70 para. 1). It is responsible for setting up and carrying out the procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring (Art. 3 No. 19). These authorities are called "notifying authorities" because they must notify the Commission (→ 51) and the other Member States of each conformity assessment body via a notification tool managed by the Commission; only then do conformity assessment bodies become notified bodies and can take up their work (→ 56).
Data protection is of considerable importance for AIS, particularly in connection with the training of GPAIM. The AIA therefore frequently refers to the GDPR, in particular for terms legally defined there (Art. 3 No. 37, 50, 51 and 52) or declaratorily to provisions of the GDPR (e.g. in Art. 26 para. 9 on the use of the operating instructions for a data protection impact assessment or in Art. 50 para. 3 on informing data subjects), and it clarifies that the GDPR applies without restriction to the processing of personal data (Art. 2 para. 7, Art. 10 para. 5).
Art. 10 para. 5 contains the only special legal basis in the AIA (apart from Art. 59 for the regulatory sandboxes → 48). There is a conflict of objectives between data minimization and the relevance of the training data. The AIA resolves this conflict by allowing even special categories of personal data to be processed in exceptional cases if this is strictly necessary when training an HRAIS in order to detect and correct bias (more precisely: the prohibition of Art. 9 para. 1 GDPR is lifted to this extent; a legal basis under Art. 6 GDPR remains necessary; ECJ, Case C-667/21, https://dtn.re/ATzHFf). The special conditions of Art. 10 para. 5 lit. a-f must, however, be observed.
More important than this question is the discussion about the applicability of data protection law to LLM training and, more broadly, about which party plays which role where personal data are processed within the scope of an LLM, and how the rights of data subjects can be ensured. This discussion is ongoing. Reference should be made in particular to the following documents and statements (in chronological order):
David Rosenthal, blog post, July 17, 2024 (https://dtn.re/0SBhz2)
David Vasella, datenrecht.ch, July 16, 2024 (https://dtn.re/BuTaCE)
Hamburg: Discussion paper “Large Language Models and personal data” of the HmbBfDI, July 15, 2024 (https://dtn.re/BuTaCE)
The position taken in particular by the HmbBfDI – that an LLM cannot contain personal data because it does not copy input data but mathematically represents relationships between tokens as vectors or tensors – falls short, because the aggregate state of personal data is not decisive: if personal information is stored not as such but in the form of mathematical relationships, in a form that can in principle be reproduced, this is a processing of personal data (cf. datenrecht.ch, https://dtn.re/BuTaCE). The question of how data subjects' rights can be implemented in LLMs is therefore not moot. The sketch below illustrates the vector representation at issue.
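For illustration only (the words and vectors are invented toy values; real models learn embeddings with hundreds or thousands of dimensions): relationships between tokens are represented as geometry, measured here by cosine similarity.

```python
import numpy as np

# Toy 4-dimensional embeddings (hypothetical values)
emb = {
    "zurich": np.array([0.9, 0.1, 0.3, 0.0]),
    "geneva": np.array([0.8, 0.2, 0.4, 0.1]),
    "banana": np.array([0.0, 0.9, 0.0, 0.8]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 = related, close to 0.0 = unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["zurich"], emb["geneva"]))  # high: related concepts
print(cosine(emb["zurich"], emb["banana"]))  # low: unrelated concepts
```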
Outside the issue of the personal reference of embeddings (→ 12), data protection authorities have also expressed views on the relationship between data protection and artificial intelligence, for example:
EDPB, Statement 3/2024 on data protection authorities' role in the Artificial Intelligence Act framework, July 16, 2024 (https://dtn.re/vGUUWh)
DSK, Guidance on Artificial Intelligence and Data Protection, May 6, 2024 (https://dtn.re/S63kDn)
BayLDA, in the 29th Activity Report 2019 (https://dtn.re/rg7FEr)
ICO, various information on AI topics (https://dtn.re/g91v0E)
Austria: FAQ on AI and data protection of the Austrian DPA, July 2, 2024 (https://dtn.re/Sz4sDS)
France: CNIL, Self-assessment guide for artificial intelligence (AI) systems (https://dtn.re/44hM5n)
Italy: Garante, Information on the protection of personal data against scraping, March 20, 2024 (https://dtn.re/TuzT85)
Switzerland: See → 63
Several European data protection supervisory authorities ("SAs") have also initiated investigations against OpenAI in connection with ChatGPT. The EDPB set up a corresponding task force in April 2023, whose work is still ongoing; a brief interim report was published on May 23, 2024 (https://dtn.re/HyvPHo).
In the area of copyright law, the AIA recognizes the problem of training with protected works. It does not address the substance of this problem, but requires the providers of GPAIM, among other things, to have a policy for compliance with EU copyright law and to publish a summary of the training content (→ 39).
Otherwise, the allocation of exclusive rights and the determination of their scope and corresponding limitations is left to the relevant provisions. In this context, it is primarily discussed under which conditions the use of copyrighted works for the training of an LLM is infringing – an understandable discussion, since LLMs compete in particular with the creatives whose works they have been trained on.
The principle of territoriality applies: whether an act infringes copyright is determined by the law of the country for which protection is claimed (for Swiss conflict of laws: Art. 110 IPRG). In the EU, this is the copyright law of the individual Member States. See, however, → 40 on the question of whether GPAIM providers outside the EU must comply with the AIA's requirements on the copyright policy.
In Switzerland, the relevant rules can be found in the CopA. In particular, the scope of the limitation provisions is unclear; a distinction must be made between the procurement of copyrighted material and its use for training purposes:
Procurement and the reproduction generally associated with it are – in contrast to mere enjoyment of a work, such as reading texts, or labeling for supervised learning (→ 10) – relevant under copyright law (as long as the concept of reproduction is not limited to acts intended to make the work perceptible). If there is no license – which can be granted expressly or tacitly – the question therefore arises whether the limitation of personal use pursuant to Art. 19 para. 1 CopA applies. Legal uncertainty currently prevails here:
One of the issues being discussed is whether training is covered by the reproduction and making available for "internal information or documentation" (Art. 19 para. 1 lit. c CopA). Since such reproduction is essentially only exempted for non-commercial purposes and should therefore not generally cover the training of an LLM, and since the reproduction of commercially available copies of works is not covered (Art. 19 para. 3 lit. a CopA), this limitation will often not apply.
Also under discussion is the so-called "text and data mining" (TDM) exception, which exempts reproduction for scientific purposes where it is technically necessary, e.g. for the semantic analysis of the source material (Art. 24d CopA). Although the concept of science is broad, applied research by private companies also requires a serious cognitive purpose. Whether the fact that a trained LLM can be used for different purposes suffices to attribute the required cognitive purpose to the training is uncertain; in any event, it is not enough that a trained LLM can be used for research purposes – the research purpose would have to encompass the training itself.
In addition, the procurement of the works used must be lawful (Art. 24d CopA), which, for example in the case of publicly accessible works, can neither be affirmed nor denied across the board.
The output, for its part, is hardly protected by copyright, because an intellectual, i.e. human, creation is lacking (Art. 2 para. 1 CopA) – at least unless the output was demonstrably determined by a natural person. For the same reason, an AI cannot be an inventor within the meaning of patent law; here too, protection presupposes that the inventor is a human being.
The AIA contains a few provisions specifically related to the use of AIS in the work context:
The use of an AIS is prohibited in a few cases under Art. 5 (→ 27). This can be relevant in the workplace, e.g. for emotion recognition at work, where the vulnerability of employees would be exploited or where social scoring would take place;
The term HRAIS covers workplace-related use cases (→ 28), for example where an AIS is used to manage access to vocational training and further education, in the recruitment or selection of job applicants, or for decisions on working conditions, promotions or dismissals.
Before commissioning or using a HRAIS in the workplace, the operator must inform the employee representatives and the affected employees that they will be “subject to the use of the high-risk AI system” (Art. 26 para. 7 AIA).
Information must also be provided if a HRAIS is used – also, but not only in the work context – to make or support decisions (Art. 26 para. 11 AIA).
Otherwise, however, the protection of employees and applicants is left to the other provisions of the applicable law, in particular data protection law and public employment law, which may provide for participation rights.
However, legislative projects to improve the protection of employees are underway in the EU, such as the EU's draft Platform Work Directive (https://dtn.re/G3ytlM), for which the Council's approval is still pending.
Several standards and standardization initiatives deal with AI. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have jointly developed standards:
ISO/IEC 42001:2023 (https://dtn.re/L8KOIs): Requirements for AI management systems
ISO/IEC TR 24028:2020 (https://dtn.re/YYy0Ha): Trustworthiness of AI systems, criteria for transparency, control and explainability
ISO/IEC 5259-1: Basis of the ISO 5259 series on data quality for analytics and ML (https://dtn.re/TggI5G)
ISO/IEC TR 5469:2024: Use of AI in safety-related functions (https://dtn.re/vbc8IL)
In Europe, CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) are involved in the development of AI standards via the joint committee CEN-CENELEC JTC 21 "Artificial Intelligence". It has published several standards, and more are under development (https://dtn.re/Gx0XMT). Published were, for example:
CEN/CLC ISO/IEC/TR 24027:2023: Bias
CEN/CLC ISO/IEC/TR 24029-1:2023: Assessment of the robustness of neural networks
The American National Institute of Standards and Technology (NIST) has developed an AI risk management framework, the AI RMF 1.0, published in January 2023 and subsequently supplemented by "profiles", i.e. implementations for specific circumstances, applications or technologies. One example is NIST AI 600-1, "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" (https://dtn.re/z3H7BJ).
On May 17, 2024, the Council of Europe (not the Council of the European Union) adopted its AI Convention (Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, AI Convention). The text of the AI Convention is available in English, together with the Explanatory Report, on datenrecht.ch (https://dtn.re/8zndsz).
The convention is a framework agreement to be implemented by the ratifying states – of which Switzerland will certainly be one – which is intended to ensure standards with regard to human rights, democracy and the rule of law when using AI systems.
Members and non-members of the Council of Europe are now invited to sign and ratify the Framework Convention. If Switzerland ratifies the Convention, it must transpose it into Swiss law (→ 63).
The requirements of the AI Convention are very vague. Moreover, it binds the member states directly only when legislating for the public sector; for the private sector, the member states are merely required to address risks in a manner "compatible with the object and purpose" of the AI Convention (Art. 3 para. 1 of the AI Convention).
There is currently no overarching regulation of the use of artificial intelligence in Switzerland. At the end of 2023, the Federal Council instructed DETEC to develop possible regulatory approaches by the end of 2024, as part of the interdepartmental coordination group on EU digital policy (see the media release, https://dtn.re/uV1Eau). DETEC, or OFCOM on its behalf, is to start from the applicable law and find regulatory approaches compatible with both the AIA and the AI Convention (→ 62).
By the end of 2024, OFCOM's analysis, including the underlying studies (e.g. on regulatory gaps in current law), and the Federal Council's decision on the direction to be taken should be available.
However, it is currently unclear which approaches DETEC will propose and which will ultimately prevail. A full adoption of the AIA is unlikely to stand much of a political chance as long as the EU does not make this a condition for participation in the single market, and the AI Convention is so vague that it hardly predetermines regulation, especially not in the private sector (→ 62). The business community (but also academia) insists on lean rules, while civil society organizations call for stricter provisions, particularly to protect against discrimination (e.g. AlgorithmWatch). The most obvious option at present appears to be an omnibus act that selectively amends the relevant legal bases.
Various political initiatives are also pending, such as the following (at federal level):
24.3796, Motion Glättli, June 14, 2024, Transparent risk-based impact assessments for the use of AI and algorithms by the federal government (https://dtn.re/vWwoDP)
24.3795, Motion Glättli, June 14, 2024, Protection against discrimination in the use of AI and algorithms (https://dtn.re/B46Qtc)
24.3611, Interpellation Cottier, June 13, 2024, Artificial Intelligence. Administrative coordination and intentions regarding the new Council of Europe Framework Convention (https://dtn.re/hdDPxQ)
24.3616, Interpellation Gössi, June 13, 2024, Media and artificial intelligence (https://dtn.re/JaEh4n)
24.3415, Interpellation Tschopp, April 17, 2024, Platforms and AI: Users’ rights (https://dtn.re/HBZFOE)
24.3363, Motion Chappuis, March 15, 2024, For a sovereign digital infrastructure in Switzerland in the age of artificial intelligence (https://dtn.re/s4SsC9)
24.3346, Interpellation Docourt, March 15, 2024, EU directive on platform work. Does Switzerland want to follow suit? (https://dtn.re/UNvBOq)
24.3235, Interpellation Marti, March 14, 2024, Artificial intelligence and the impact on copyright (https://dtn.re/jpX0Cg)
24.3209, Motion Juillard, March 14, 2024, For a sovereign digital infrastructure in Switzerland in the age of artificial intelligence (AI) (https://dtn.re/NsqdKN)
23.4517, Interpellation Gugger, December 22, 2023, Artificial intelligence and participation. Are there gaps in the law? (https://dtn.re/hl1Q54)
23.4492, Motion Gysi, December 22, 2023, Artificial intelligence in the workplace. Strengthening the participation rights of employees (https://dtn.re/PH8ab1)
23.4051, Interpellation Schlatter, September 29, 2023, Artificial intelligence and robotics. Ethics belongs in education! (https://dtn.re/PMNgtC)
23.393, Interpellation Cottier, June 16, 2023, Artificial intelligence. What framework conditions need to be created to make the most of it and avoid undesirable developments? (https://dtn.re/FXxB9v)
23.3812, Interpellation Widmer, June 15, 2023, Artificial Intelligence. Dangers and potentials for democracy (https://dtn.re/ZkaTUc)
23.4133, Interpellation Marti, September 28, 2023, Algorithmic discrimination. Is the legal protection against discrimination sufficient? (https://dtn.re/xr97Zq)
23.3849, Motion Bendahan, June 15, 2023, Create a competence center or competence network for artificial intelligence in Switzerland (https://dtn.re/sqLWYa)
23.3654, Interpellation Riniker, June 13, 2023, Switzerland’s role in international cooperation in the field of artificial intelligence (https://dtn.re/sUoUb3)
23.3806, Motion Marti, June 15, 2023, Declaration obligation for artificial intelligence applications and automated decision-making systems (https://dtn.re/D3FmNo)
23.3563, Motion Mahaim, May 4, 2023, Regulate deepfakes (https://dtn.re/kwNWvh)
23.3516, Interpellation Feller, May 2, 2023, General or temporary ban on certain artificial intelligence platforms (https://dtn.re/Ig8JPJ)
23.3201, Postulate Dobler, March 16, 2023, Legal situation of artificial intelligence. Clarify uncertainties, promote innovation! (https://dtn.re/e7sGlM)
23.3147, Interpellation Bendahan, March 14, 2023, Regulation of artificial intelligence in Switzerland (https://dtn.re/xMVLIE)
21.4406, Postulate Marti, December 9, 2021, Report on the regulation of automated decision-making systems (https://dtn.re/PQbXqs)
21.3206, Interpellation Pointet, March 17, 2021, Which state processes rely on artificial intelligence? (https://dtn.re/WUw9Hr)
21.3012, Postulate Security Policy Commission, January 15, 2021, Clear rules for autonomous weapons and artificial intelligence (https://dtn.re/duRhvk)
19.3919, Interpellation Riklin, June 21, 2019, Artificial intelligence and digital transformation. We need a holistic strategy (https://dtn.re/5x93tL)
Of course, the generally applicable legal provisions also apply to the use of AI. This concerns, for example,
data protection law (if personal data is processed during training or use),
the law on the protection of secrets (if secret information is used for training or as input),
employment contract law (if personal data of applicants and employees is processed, and where AI affects the employer’s duty of care),
public employment law (e.g. where duties to cooperate apply or behavioral monitoring is at issue),
personality rights (e.g. when conversations or team calls are recorded),
unfair competition law (where AI-generated content can be misleading),
copyright law (e.g. when an AI is trained with protected works or works are used as input, and where the protectability of output is at issue),
criminal law (for recordings of non-public conversations or, generally, where AI is used for criminal conduct), and
other areas of law.
Sectoral regulations may also be affected. A few supervisory authorities have already formulated expectations, in particular FINMA (https://dtn.re/bOT1Ez).
Private actors have also adopted rules in the meantime. This applies above all to particularly exposed players such as
the media (e.g. the SRG journalistic guidelines, https://dtn.re/f1UTYZ),
political parties (e.g. with the AI Code of the Greens, the GLP, the SP, the Center Party and the EPP, https://dtn.re/1te4U8) or
research and education (e.g. with the recommendations for dealing with generative artificial intelligence at UZH, https://dtn.re/aBstLV).
Numerous private companies have also issued or are in the process of issuing guidelines, codes and instructions, some of which are public and some non-public.
The following table lists the terms defined in Art. 3 AIA, in the English original and in the German version (the latter rendered here in English):

| No. | English | German |
|---|---|---|
| 1 | AI system | AI system |
| 2 | Risk | Risk |
| 3 | Provider | Provider |
| 4 | Deployer | Operator |
| 5 | Authorized representative | Authorized representative |
| 6 | Importer | Importer |
| 7 | Distributor | Retailer |
| 8 | Operator | Actor |
| 9 | Placing on the market | Placing on the market |
| 10 | Making available on the market | Provision on the market |
| 11 | Putting into service | Commissioning |
| 12 | Intended purpose | Intended use |
| 13 | Reasonably foreseeable misuse | Reasonably foreseeable misuse |
| 14 | Safety component | Safety component |
| 15 | Instructions for use | Operating instructions |
| 16 | Recall of an AI system | Recall of an AI system |
| 17 | Withdrawal of an AI system | Withdrawal of an AI system |
| 18 | Performance of an AI system | Performance of an AI system |
| 19 | Notifying authority | Notifying authority |
| 20 | Conformity assessment | Conformity assessment |
| 21 | Conformity assessment body | Conformity assessment body |
| 22 | Notified body | Notified body |
| 23 | Substantial modification | Significant change |
| 24 | CE marking | CE marking |
| 25 | Post-market monitoring system | Post-market surveillance system |
| 26 | Market surveillance authority | Market surveillance authority |
| 27 | Harmonized standard | Harmonized standard |
| 28 | Common specification | Common specification |
| 29 | Training data | Training data |
| 30 | Validation data | Validation data |
| 31 | Validation data set | Validation data set |
| 32 | Testing data | Test data |
| 33 | Input data | Input data |
| 34 | Biometric data | Biometric data |
| 35 | Biometric identification | Biometric identification |
| 36 | Biometric verification | Biometric verification |
| 37 | Special categories of personal data | Special categories of personal data |
| 38 | Sensitive operational data | Sensitive operational data |
| 39 | Emotion recognition system | Emotion recognition system |
| 40 | Biometric categorization system | System for biometric categorization |
| 41 | Remote biometric identification system | Biometric remote identification system |
| 42 | Real-time remote biometric identification system | Biometric real-time remote identification system |
| 43 | Post-remote biometric identification system | System for subsequent biometric remote identification |
| 44 | Publicly accessible space | Publicly accessible space |
| 45 | Law enforcement authority | Law enforcement agency |
| 46 | Law enforcement | Prosecution |
| 47 | AI Office | Office for Artificial Intelligence |
| 48 | National competent authority | Competent national authority |
| 49 | Serious incident | Serious incident |
| 50 | Personal data | Personal data |
| 51 | Non-personal data | Non-personal data |
| 52 | Profiling | Profiling |
| 53 | Real-world testing plan | Plan for a test under real conditions |
| 54 | Sandbox plan | Plan for the real laboratory |
| 55 | AI regulatory sandbox | AI real laboratory |
| 56 | AI literacy | AI competence |
| 57 | Testing in real-world conditions | Test under real conditions |
| 58 | Subject | Test participant |
| 59 | Informed consent | Informed consent |
| 60 | Deep fake | Deepfake |
| 61 | Widespread infringement | Widespread violation |
| 62 | Critical infrastructure | Critical infrastructures |
| 63 | General-purpose AI model | AI model with general purpose |
| 64 | High-impact capabilities | Skills with high impact |
| 65 | Systemic risk | Systemic risk |
| 66 | General-purpose AI system | AI system with general purpose |
| 67 | Floating-point operation | Floating point operation |
| 68 | Downstream provider | Downstream provider |