Draft regulation
In April 2021, the European Commission published a draft regulation establishing harmonized rules for artificial intelligence (AI Regulation). The draft is currently being discussed in the European Parliament.
Scope
The regulation governs AI systems. On November 29, 2021, the Slovenian Council Presidency published a compromise text that, among other things, supplemented the subject matter and scope of application and adjusted definitions. In particular, the definition of "AI system" was changed: it is no longer decisive whether a system functions autonomously or as a component of a product. An AI system is now defined as:
‘artificial intelligence system’ (AI system) means a system that
(i) receives machine and/or human-based data and inputs,
(ii) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and
(iii) generates outputs in the form of content (generative AI systems), predictions, recommendations or decisions, which influence the environments it interacts with;
Annex I contains a list of the covered AI techniques and approaches:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
In terms of territorial scope, the AI Regulation applies not only to users and providers established in the EU, but also to the placing on the EU market of AI systems or of products using an AI system, and to providers and users of AI systems to the extent that the output produced by those systems is used in the EU. Recitals 10 and 11 provide further details in this regard.
Prohibited practices (blacklist)
The regulation prohibits certain particularly risky or ethically questionable uses of artificial intelligence ("prohibited practices"), e.g. the use of AI systems that
- influence persons beyond their conscious awareness,
- exploit the vulnerabilities of particularly vulnerable persons,
- classify the trustworthiness of individuals on the basis of social affiliation or social behavior (social scoring).
To this extent, not only is the use of such AI systems prohibited, but so is the placing of corresponding systems on the market. This is another example of the kind of upstream, preventive protection familiar from data protection law, which can also be observed elsewhere (e.g., in the obligation to conduct impact assessments, in the principle of privacy by design, and in the fiction of a privacy violation even in the case of negligible breaches of processing principles).
High-risk systems
The regulation also provides, under certain conditions, for the classification of AI systems as high-risk systems (according to Annex III of the Regulation), which entails special requirements for the system itself. Systems are classified as high-risk on the one hand because of the technology used and on the other hand because of their use in certain sectors. For example, AI systems are considered high-risk if they are used in the following areas:
- biometric identification without consent (one can imagine the importance that the effectiveness of consent to biometric identification takes on when it is based on AI);
- safety components in traffic control, energy supply, digital infrastructure or emission control;
- admission tests for examinations;
- evaluation of job applications;
- controlling access to certain services;
- law enforcement.
Such systems must undergo a conformity assessment procedure before they can be placed on the EU market. Likewise, heightened requirements apply to documentation and to information provided to users, and "human oversight" must be ensured – another example of a mandatory human element in automated systems, reminiscent of data subject rights in automated individual decision-making. All members of the value chain of high-risk systems – suppliers, importers, distributors, and users – also have specific obligations, including market monitoring requirements.
Authorities and sanctions
In addition, supervisory structures are created:
- Member States must establish competent national authorities or designate existing authorities;
- a European Artificial Intelligence Board is created to advise the EU Commission.
Compliance is ensured through sanctions, which in extreme cases can amount to up to 6% of total annual turnover or EUR 30 million.