The EU Commission has approved guidelines on the concept of an AI system (AIS), even though they have not yet been formally adopted (“Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act)”):
The 12-page guidelines – which follow the Guidelines on prohibited practices – attempt to clarify the term AIS, which the AI Act ultimately does not define sharply but only illustrates. General-purpose AI models (GPAIM) are not addressed.
According to Art. 3(1) AI Act, an AIS is a machine-based system that is designed to operate with “varying degrees of autonomy”, that may demonstrate adaptability after deployment (“may”), and that infers from its inputs, for explicit or implicit goals, how to generate outputs such as predictions, content, recommendations or decisions, where these outputs “may affect physical or virtual environments”.
According to the Commission, this definition comprises seven elements, which, however, do not all necessarily have to be present and which overlap:
- Machine-based system
- Varying degrees of autonomy
- Adaptability
- Task orientation of the system (“goals”)
- Inference
- Output: Predictions, content, recommendations or decisions
- Influence on the environment
That the definition in Art. 3(1) AI Act does not allow for a clear demarcation is something we have explained in our FAQ on the AI Act; this insight is not new. The EU Commission, which is required by Art. 96 AI Act to issue guidelines “for the practical implementation” of the AI Act, therefore attempts to clarify the term. However, it succeeds only to some extent, with a kind of de minimis threshold that is not very tangible. In the summary of the guidelines, the Commission acknowledges this failure with frustrating clarity:
No automatic determination or exhaustive lists of systems that either fall within or outside the definition of an AI system are possible.
(1) Machine-based system: any computer
According to the Commission, this refers to a system that runs on hardware and software – i.e. a computer.
(2) Autonomy: “Some reasonable degree of independence”
The autonomy of the system is the core of the definition. The reference in Art. 3(1) AI Act that there are degrees of autonomy is not very helpful – on the contrary, this would in itself also cover an autonomy of 1%. According to Recital 12, however, it is at least a matter of a certain independence from human influence.
The Commission clarifies that the criterion of autonomy and the derivation of output are related because autonomy refers to this derivation. It would accordingly be correct to speak of one criterion rather than two, but the Commission does not make this explicit.
More important is the question of how autonomy is to be determined and what degree is necessary. It is clear that a system is not an AIS when it is completely controlled by a human:
…excludes systems that are designed to operate solely with full manual human involvement and intervention. Human involvement and human intervention can be either direct, e.g. through manual controls, or indirect, e.g. through automated systems-based controls which allow humans to delegate or supervise system operations.
The Commission does not answer at this point what degree of autonomy is required; however, it returns to the same question in connection with inference (see below):
All systems that are designed to operate with some reasonable degree of independence of actions fulfill the condition of autonomy in the definition of an AI system.
(3) Adaptability: not required
According to Recital 12, adaptability refers to an ability to learn, i.e. an adaptation to the environment that can change the output. Either way, adaptability is not a necessary part of the definition because it is an optional and not a mandatory criterion:
The use of the term ‘may’ in relation to this element of the definition indicates that a system may, but does not necessarily have to, possess adaptiveness or self-learning capabilities after deployment to constitute an AI system. Accordingly, a system’s ability to automatically learn […] is a facultative and thus not a decisive condition […].
(4) Task orientation
An AIS must generate an output from the input “for explicit or implicit objectives”. The Commission treats this element as largely descriptive and merely clarifies what is meant by an objective in this sense:
Explicit objectives refer to clearly stated goals that are directly encoded by the developer into the system. For example, they may be specified as the optimization of some cost function, a probability, or a cumulative reward.
Implicit objectives refer to goals that are not explicitly stated but may be deduced from the behavior or underlying assumptions of the system. These objectives may arise from the training data or from the interaction of the AI system with its environment.
The “intended use”, which Art. 3(12) AI Act defines as the use intended by the provider, is not the same.
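The distinction can be made tangible with a minimal sketch (our illustration, not taken from the guidelines; all names and values are hypothetical): an explicit objective is directly encoded by the developer, for example as a cost function to be minimized, while an implicit objective is never written down and only emerges from the data or the environment:

```python
# Our illustrative sketch (hypothetical names and values, not from the
# guidelines): an *explicit* objective directly encoded by the developer,
# here as a cost function that training would minimize.

def mean_squared_error(y_true: list[float], y_pred: list[float]) -> float:
    """Explicit objective: minimize the average squared prediction error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# An *implicit* objective, by contrast, is not coded anywhere; it arises
# from the training data or the system's interaction with its environment,
# e.g. a recommender trained on click data that ends up optimizing for
# engagement without that goal ever being stated.

print(mean_squared_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # ≈ 0.02
```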
(5) Derivation of output: de minimis threshold (!)
This is where the Commission apparently sees the essential demarcation of an AIS, and as mentioned, this element can be read together with autonomy.
According to the Commission, derivation – inference – is not just about deriving output, but rather about the design of the AIS – it must be built in such a way that it is technically enabled to infer output:
The terms ‘infer how to’, used in Article 3(1) and clarified in recital 12 AI Act, is broader than, and not limited only to, a narrow understanding of the concept of inference as an ability of a system to derive outputs from given inputs, and thus infer the result. Accordingly, the formulation used in Article 3(1) AI Act, i.e. ‘infers, how to generate outputs’, should be understood as referring to the building phase, whereby a system derives outputs through AI techniques enabling inferencing.
[…] Focusing specifically on the building phase of the AI system, recital 12 AI Act further clarifies that ‘[t]he techniques that enable inference while building an AI system include […]’.
[…] This clarification explicitly underlines that the concept of ‘inference’ should be understood in a broader sense as encompassing the ‘building phase’ of the AI system.
[…]
On this basis and that of Recital 12, the Commission takes a closer look at the relevant techniques:
- Machine Learning (ML) as a generic term;
- Supervised Learning: The system learns to recognize and generalize patterns from annotated data (Ex.: spam filter, classification of images, fraud detection; see the sketch after this list);
- Unsupervised Learning: The system learns to recognize patterns in non-annotated data (Ex.: research into new active ingredients in the pharmaceutical industry);
- Self-supervised learning: A use case of unsupervised learning where the system itself creates annotations or defines goals (Ex.: image recognition, LLMs);
- Reinforcement Learning: Learning through experience via a reward function (Ex.: a robot learns to grasp objects; recommendation functions in search engines; autonomous driving);
- Deep Learning: Learning with neural networks, usually based on large amounts of data;
- Logic- and knowledge-based approaches: Deductive or inductive derivation from encoded knowledge via logic, defined rules or ontologies (Ex.: classical language models based on grammar and semantics, early expert systems for medical diagnostics).
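What “inference while building” means for the supervised case can be shown in a few lines – a minimal sketch (our illustration; the spam features and values are invented, and scikit-learn is assumed as the library): the decision rule is derived from annotated examples rather than defined by a developer:

```python
# Minimal supervised-learning sketch (our illustration; the features and
# values are invented). The decision rule is *inferred* from annotated
# data during the building phase, not written down by a human.
from sklearn.linear_model import LogisticRegression

# Toy features per message: [number of links, ratio of ALL-CAPS words]
X = [[0, 0.0], [1, 0.1], [7, 0.8], [9, 0.9]]
y = [0, 0, 1, 1]  # annotations: 0 = ham, 1 = spam

clf = LogisticRegression().fit(X, y)
print(clf.predict([[8, 0.7]]))  # likely [1] – a learned, not hand-coded, rule
```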
So what is not an AIS? Recital 12:
… the definition should be based on key characteristics of AI systems that distinguish AI systems from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.
For the Commission – and this is the real point of the guidelines – there are systems that are capable of some inference but are not AIS:
Some systems have the capacity to infer in a narrow manner but may nevertheless fall outside of the scope of the AI system definition because of their limited capacity to analyze patterns and adjust autonomously their output.
Although this probably contradicts Recital 12, it is welcome because AIS can only be meaningfully distinguished from other systems using quantitative criteria. The Commission includes the following among the exempted systems – the category of simple forecasting models is particularly interesting:
- “Systems for improving mathematical optimization”: This applies, for example, to statistical regression analyses (→ FAQ AI Act):
This is because, while those models have the capacity to infer, they do not transcend ‘basic data processing’.
Examples include:
- Methods that have been in use for years (depending on the individual case, but a long period of use is an indication) and that merely optimize a known algorithm by adjusting functions or parameters, such as “physics-based systems” that improve computing performance, e.g. for forecasting purposes;
- a system that improves the use of bandwidth or resources in a satellite-based communication system.
In contrast, systems that allow “adjustments of their decision making models in an intelligent way” remain covered. So if a share price forecast works with a regression model that adjusts during operation, it is not a pure regression and would be covered. The same would have to apply to a recommendation system whose parameters can adapt.
- “Basic data processing”: Predefined data processing that handles input according to fixed rules, without ML or other inference (e.g. a filter function in a database), and that neither learns nor reasons. This also includes, for example, systems that merely visualize data using statistical methods or provide a statistical evaluation of surveys.
- “Systems based on classical heuristics”: The aim is to find an optimal solution to a problem, e.g. through rules, pattern recognition or trial-and-error. In contrast to ML, such systems apply defined rules, e.g. a chess program that uses a “Minimax” algorithm, and can neither adapt nor generalize.
- “Simple prediction systems”: These are systems that work with simple statistics, even if they technically use ML. They are not AIS “due to their performance” – however that is to be quantified. Examples (see the sketch after this list) are:
- Financial forecasts that predict share prices based on the average historical price,
- a temperature forecast based on historical measured values,
- Estimation systems such as a customer service system that estimates response times or sales forecasts.
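This demarcation can be sketched in a few lines (our illustration; the prices and the learning rate are invented, not taken from the guidelines): a forecast based on the historical average applies a fixed statistical rule, whereas a model that updates its parameters in operation adjusts its decision-making and would, on the reading above, remain covered:

```python
# Our illustrative sketch of the demarcation (invented prices and
# learning rate; not taken from the guidelines).

prices = [100.0, 102.0, 101.0, 103.0]

# (a) "Simple prediction system": forecast = historical average.
# A fixed statistical rule with no adjustment of a decision-making
# model – per the guidelines arguably *not* an AIS.
forecast_static = sum(prices) / len(prices)  # -> 101.5

# (b) Model that adjusts its parameters during operation: each new
# observation nudges the estimate – this adaptation is what, on the
# reading above, keeps a system within the AIS definition.
estimate, lr = prices[0], 0.5  # lr: hypothetical learning rate
for p in prices[1:]:
    estimate += lr * (p - estimate)  # incremental (online) update
forecast_adaptive = estimate  # -> 102.0

print(forecast_static, forecast_adaptive)
```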
(6) Predictions, content, recommendations or decisions
According to Art. 3(1) AI Act, an AIS can generate output in the form of “predictions, content, recommendations or decisions” that “may affect physical or virtual environments”. The guidelines first address the types of output, but contain nothing more than general descriptions that do not contribute to an understanding of the term.
Nevertheless, the complexity of the output can probably be an indication of an AIS, but this criterion is likely to coincide with those of autonomy and inference:
AI systems can generally generate more nuanced outputs than other systems, for example, by leveraging patterns learned during training or by using expert-defined rules to make decisions, offering more sophisticated reasoning in structured environments.
(7) Influence on the environment
The fact that the output can influence the environment is mentioned by the Commission as a further conceptual element. However, this does not distinguish AIS from other systems.