The EU Commission has approved guidelines on the concept of AI systems (AIS), even if they have not yet been formally adopted (“Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act)”).

The 12-page guidelines – which follow the Guidelines on prohibited practices – attempt to clarify the term AIS, which is ultimately not defined in the AI Act but only illustrated. General-purpose AI models (GPAIM) are not addressed.

According to Art. 3(1) AI Act, an AIS is a machine-based system that is designed to operate with “varying degrees of autonomy”, that may demonstrate adaptability once deployed (“may”), and that infers from the inputs it receives, for explicit or implicit goals, how to generate outputs such as predictions, content, recommendations or decisions, where these outputs “may affect physical or virtual environments”.

According to the Commission, this definition includes seven elements, which, however, do not all necessarily have to be present and which overlap:

  1. Machine-based system
  2. Varying degrees of autonomy
  3. Adaptability
  4. Task orientation of the system (“goals”)
  5. Inference
  6. Output: predictions, content, recommendations or decisions
  7. Influence on the environment

The fact that the definition in Art. 3(1) AI Act does not allow for a clear demarcation has already been explained in our FAQ on the AI Act; this realization is not new. The EU Commission, which is supposed to issue guidelines “for the practical implementation” of the AI Act in accordance with Art. 96 AI Act, is therefore attempting to clarify the term. However, it only succeeds to some extent, with a kind of de minimis threshold that is not very tangible. In the summary of the guidelines, the Commission states this failure with frustrating clarity:

No automatic determination or exhaustive lists of systems that either fall within or outside the definition of an AI system are possible.

(1) Machine-based system: any computer

According to the Commission, this refers to a system that runs on hardware and software – i.e. a computer.

(2) Autonomy: “Some reasonable degree of independence”

The autonomy of the system is the core of the definition. The reference in Art. 3(1) AI Act to varying degrees of autonomy is not very helpful – on the contrary, it would in itself also cover an autonomy of 1%. According to Recital 12, however, what is required is at least a certain independence from human influence.

The Commission clarifies that the criterion of autonomy and the derivation of output are related, because autonomy refers to this derivation. Accordingly, it would be correct to assume one criterion rather than two, but the Commission does not make this clear.

More important is the question of how autonomy is to be determined and what degree is necessary. What is clear is that a system is not an AIS if it is completely controlled by a human:

…excludes systems that are designed to operate solely with full manual human involvement and intervention. Human involvement and human intervention can be either direct, e.g. through manual controls, or indirect, e.g. through automated systems-based controls which allow humans to delegate or supervise system operations.

The Commission does not answer at this point what degree of autonomy is required; however, it returns to the same question in connection with inference (see below):

All systems that are designed to operate with some reasonable degree of independence of actions fulfill the condition of autonomy in the definition of an AI system.

(3) Adaptability: not required

According to Recital 12, adaptability refers to an ability to learn, i.e. an adaptation to the environment that can change the output. Either way, adaptability is not a necessary part of the definition, because it is an optional and not a mandatory criterion:

The use of the term ‘may’ in relation to this element of the definition indicates that a system may, but does not necessarily have to, possess adaptiveness or self-learning capabilities after deployment to constitute an AI system. Accordingly, a system’s ability to automatically learn […] is a facultative and thus not a decisive condition […].

(4) Task orientation

An AIS must generate an output from the input “for explicit or implicit objectives”. The Commission understands this element more as an illustration and only clarifies what is meant by an objective in this sense:

Explicit objectives refer to clearly stated goals that are directly encoded by the developer into the system. For example, they may be specified as the optimization of some cost function, a probability, or a cumulative reward.

Implicit objectives refer to goals that are not explicitly stated but may be deduced from the behavior or underlying assumptions of the system. These objectives may arise from the training data or from the interaction of the AI system with its environment.

The “intended purpose”, which Art. 3(12) AI Act defines as the use intended by the provider, is not the same thing.
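
To make the notion of an explicitly encoded objective more concrete, the following is a minimal, purely illustrative sketch (our own example, not taken from the guidelines; the data are invented): a linear model whose developer directly encodes a mean-squared-error cost function and has the system minimize it by gradient descent.

```python
import numpy as np

# Purely illustrative sketch (not from the guidelines): an "explicit objective"
# directly encoded by the developer -- here the mean squared error
# J(w, b) = mean((w*x + b - y)^2), minimized by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=100)   # invented data

w, b = 0.0, 0.0   # model parameters
lr = 0.01         # learning rate

for _ in range(2000):
    error = (w * x + b) - y
    # Gradients of the explicitly encoded cost function
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # roughly 3 and 2
```

A probability objective or a cumulative reward in reinforcement learning would play the same role: the goal is stated by the developer rather than deduced from the system’s behavior.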

(5) Derivation of output: a de minimis threshold (!)

This is where the Commission apparently sees the essential demarcation criterion for AIS, and, as mentioned, this element can be read together with autonomy.

According to the Commission, derivation – inference – is not just about deriving output, but rather about the design of the AIS: it must be built in such a way that it is technically capable of inferring output:

The terms ‘infer how to’, used in Article 3(1) and clarified in recital 12 AI Act, is broader than, and not limited only to, a narrow understanding of the concept of inference as an ability of a system to derive outputs from given inputs, and thus infer the result. Accordingly, the formulation used in Article 3(1) AI Act, i.e. ‘infers, how to generate outputs’, should be understood as referring to the building phase, whereby a system derives outputs through AI techniques enabling inferencing.

[…]

Focusing specifically on the building phase of the AI system, recital 12 AI Act further clarifies that ‘[t]he techniques that enable inference while building an AI system include […]’.

This clarification explicitly underlines that the concept of ‘inference’ should be understood in a broader sense as encompassing the ‘building phase’ of the AI system.
[…]

On this basis and that of Recital 12, the Commission takes a closer look at the relevant technologies:

  • Machine Learning (ML) as a generic term;
  • Supervised learning: the system learns to recognize and generalize patterns from annotated data (e.g. spam filters, classification of images, fraud detection; see the sketch after this list);
  • Unsupervised learning: the system learns to recognize patterns in non-annotated data (e.g. research into new active ingredients in the pharmaceutical industry);
  • Self-supervised learning: a special case of unsupervised learning where the system itself creates annotations or defines goals (e.g. image recognition, LLMs);
  • Reinforcement learning: learning through experience via a reward function (e.g. a robot learns to grasp objects; recommendation functions in search engines; autonomous driving);
  • Deep learning: learning with neural networks, usually based on large amounts of data;
  • Logic- and knowledge-based approaches: deductive or inductive derivation from encoded knowledge via logic, defined rules or ontologies (e.g. classical language models based on grammar and semantics, early expert systems for medical diagnostics).
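
For the supervised learning case mentioned above, the following minimal sketch may help (it assumes scikit-learn is available; the texts and labels are invented): a spam-filter-style classifier is derived from annotated data in the building phase and then infers predictions for new, unseen inputs.

```python
# Purely illustrative sketch of supervised learning (our example, not from the
# guidelines): the system derives its decision logic from annotated data in the
# building phase and then infers an output (a prediction) for new input.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy training data (label 1 = spam, 0 = not spam)
texts = [
    "win a free prize now",
    "cheap pills online",
    "meeting at 10 am",
    "please review the attached contract",
    "free crypto giveaway",
    "lunch tomorrow?",
]
labels = [1, 1, 0, 0, 1, 0]

# Building phase: a statistical learning technique derives the model from data
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Use phase: the system infers outputs from new, unseen inputs
print(model.predict(["claim your free prize"]))   # likely [1] (spam)
print(model.predict(["agenda for the meeting"]))  # likely [0] (not spam)
```

The point for the definition is not the specific library, but that the decision logic is learned from data rather than fully specified as rules by a human.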

So what is not an AIS? Recital 12:

… the definition should be based on key characteristics of AI systems that distinguish AI systems from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.

For the Commission – and this is the real point of the guidelines – there are systems that are capable of some inference but are not AIS:

Some systems have the capacity to infer in a narrow manner but may nevertheless fall outside of the scope of the AI system definition because of their limited capacity to analyze patterns and adjust autonomously their output.

Although this probably contradicts Recital 12, it is welcome, because AIS can only be meaningfully distinguished from other systems using quantitative criteria. The Commission includes the following among the exempted systems – the category of simple forecasting models is particularly interesting:

  • “Systems for improving mathematical optimization”: this applies, for example, to statistical regression analyses (→ FAQ AI Act):

    This is because, while those models have the capacity to infer, they do not transcend ‘basic data processing’.

    Examples include:

    • methods that have been used for years (depending on the individual case, but a long period of use is an indication) and that only optimize a known algorithm by adjusting functions or parameters, such as “physics-based systems” that improve computing performance, e.g. for forecasting purposes;
    • a system that improves the use of bandwidth or resources in a satellite-based communication system.

    In contrast, systems that allow “adjustments of their decision making models in an intelligent way” remain covered. So if a share price forecast works with a regression model that adjusts during operation, it is not a pure regression and would be covered; the same would have to apply to a recommendation system whose parameters can adapt (see the sketch after this list).

  • “Basic data processing”: predefined data processing that processes input according to fixed rules, without ML or other inference (e.g. a filter function in a database), and that does not learn or reason. This also includes, for example, systems that only visualize data using statistical methods or a statistical evaluation of surveys.
  • “Systems based on classical heuristics”: the aim is to find an optimal solution to a problem, e.g. through rules, pattern recognition or trial-and-error. In contrast to ML, such systems apply predefined rules, e.g. a chess program that uses a “minimax” algorithm, and can neither adapt nor generalize.
  • “Simple prediction systems”: systems that work with simple statistics, even if they technically use ML. They are not AIS “due to their performance” – however that is to be quantified. Examples are
    • financial forecasts that predict share prices based on the average historical price,
    • a temperature forecast based on historical measured values,
    • estimation systems, such as a customer service system that estimates response times, or sales forecasts.
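
Following the share price example above, the sketch below is a purely illustrative reading of where the line could run – it is not an official test. Variant (a) is a “simple prediction system” that merely returns the historical average; variant (b) re-estimates its regression coefficients whenever new data arrives, i.e. it “adjusts during operation” and would remain covered on the reading set out above.

```python
import numpy as np

# Purely illustrative reading of the distinction (not an official test),
# following the share-price example above. Prices are invented.
prices = [101.0, 102.5, 99.8, 103.1, 104.0]

# (a) "Simple prediction system": the forecast is a fixed statistical rule
# (the historical average) -- per the guidelines, not an AIS.
def forecast_average(history):
    return sum(history) / len(history)

# (b) A regression whose coefficients are re-estimated whenever new data
# arrives, i.e. a model that "adjusts during operation" -- this would
# remain covered on the reading above.
class OnlineLinearForecast:
    def __init__(self):
        self.history = []

    def update(self, price):
        self.history.append(price)

    def forecast(self):
        t = np.arange(len(self.history))
        slope, intercept = np.polyfit(t, self.history, 1)  # refit on each call
        return slope * len(self.history) + intercept       # next time step

print(forecast_average(prices))

model = OnlineLinearForecast()
for p in prices:
    model.update(p)
print(model.forecast())
```

Whether such a refit already amounts to an “adjustment of the decision making model in an intelligent way” is exactly the kind of quantitative question the guidelines leave open.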

(6) Predictions, content, recommendations or decisions

According to Art. 3(1) AI Act, an AIS can generate as output “predictions, content, recommendations or decisions” that “may affect physical or virtual environments”. The guidelines first address the types of output, but contain nothing more than general descriptions that do not contribute to an understanding of the term.

Nevertheless: an indication of an AIS can probably be the complexity of the output, but this criterion is likely to coincide with those of autonomy and inference:

AI systems can generally generate more nuanced outputs than other systems, for example, by leveraging patterns learned during training or by using expert-defined rules to make decisions, offering more sophisticated reasoning in structured environments.

(7) Influence on the environment

The fact that the output can influence the environment is mentioned by the Commission as a further conceptual element. However, this does not differentiate AIS from other systems.