The AI Act: The EU's Pioneering Regulation of AI

24.04.2024 | NEWS | EU AI Regulation

On 13 March 2024, the European Parliament approved the “AI Act”, making it the first major regulation of AI worldwide. A few legal steps remain before it enters into force in the EU (you will find them at the end of this article), but it is worth becoming acquainted with its most important provisions now.1

The Regulation lays down uniform rules for the use of AI systems on the internal market, to ensure that these systems remain under human oversight, that their use is safe in all situations, and that they comply with fundamental human rights.

I particularly liked the following statement from Recital 6 of the AI Act, and I think it should be the basis for any future amendments to this Regulation as AI systems develop:

“As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.”

The Regulation proposes the following definition of an AI system: “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Some commentators2 consider the definition above too broad, arguing that it could cover virtually any software, and that the AI Act should instead focus only on high-risk AI systems and exclude certain categories of software.

The AI Act regulates several categories of AI systems, depending on the risk they involve: (i) some AI systems are prohibited because they pose unacceptable risks; (ii) high-risk AI systems are subject to a wide range of requirements; (iii) AI systems presenting a limited risk must comply with fewer obligations.

Here are some examples of AI systems that will be prohibited in the EU because they create unacceptable risks:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness, or manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or a group of persons by impairing their ability to make informed decisions;
  • AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation;
  • AI systems used for “social-scoring” purposes (e.g. by public authorities), for the evaluation or classification of natural persons based on their social behavior or on known, inferred or predicted personal characteristics;
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage; 
  • Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases (e.g. the prevention of a specific and imminent threat of a terrorist attack).

A substantial part of the AI Act is dedicated to high-risk AI systems, detailing the conditions for placing such systems on the market and using them.

The AI Act identifies the following as high-risk AI systems:

  • AI systems that are a safety component of a product, or that are themselves a product, falling under EU health and safety harmonization legislation (e.g. aviation, cars, medical devices, toys).
  • AI systems deployed in 8 specific areas detailed in Annex III of the AI Act (e.g. the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity; employment and workers’ management; law enforcement; administration of justice and democratic processes), a list which the Commission can update from time to time by delegated acts.

High-risk AI systems will have to comply with a series of requirements related to: 

  • Implementing a risk management system throughout the entire life cycle of the AI system; 
  • Respecting certain quality criteria for the data sets used to train the AI system;
  • Record keeping: the AI system shall technically allow for the automatic recording of events (logs) over its lifetime (a minimal sketch of such logging follows this list);
  • Transparency: the operation of the AI system shall be sufficiently transparent to enable deployers to interpret a system’s output and use it in an appropriate manner;
  • Human oversight: the AI system shall provide appropriate human-machine interface tools and, for the remote biometric identification systems listed in Annex III of the Regulation, no action or decision may be taken by the deployer on the basis of an identification resulting from the system unless that identification has been separately verified and confirmed by at least two persons;
  • Implementing a quality management system, including procedures related to the reporting of any serious incident;
  • Ensuring an adequate level of accuracy, robustness and cybersecurity, before being placed on the market or put into service.
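For readers with a technical background, here is a minimal sketch, in Python, of what the record-keeping requirement could look like in practice. The class, field names and file format are my own illustration and are not prescribed by the AI Act, which only requires that events be recorded automatically over the system’s lifetime.

```python
import json
import time
import uuid

class EventLog:
    """Append-only event log for a hypothetical high-risk AI system."""

    def __init__(self, path: str):
        self.path = path

    def record(self, event_type: str, detail: dict) -> None:
        # Illustrative fields only; the AI Act does not mandate a schema.
        entry = {
            "id": str(uuid.uuid4()),   # unique event identifier
            "timestamp": time.time(),  # when the event occurred
            "type": event_type,        # e.g. "inference", "human_override"
            "detail": detail,          # free-form event payload
        }
        # One JSON object per line, appended so past records are never rewritten.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

log = EventLog("ai_system_events.jsonl")
log.record("inference", {"model_version": "1.0", "decision": "loan_approved"})
```

An append-only, line-oriented format is one simple way to keep a complete, chronological trail of events for later audit.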

Limited-risk AI systems still have to comply with some transparency rules. For instance, AI systems intended to interact directly with people must be designed in such a way that those persons are informed that they are interacting with an AI system. Likewise, AI systems (including general-purpose AI systems) generating synthetic audio, image, video or text content (so-called “deep fakes”) must clearly mark these outputs as artificially generated or artificially manipulated.

An important chapter of the AI Act deals with general-purpose AI models (GPAIs). The models underlying OpenAI’s “ChatGPT” and Google’s “Gemini” are examples of GPAIs. GPAIs are defined as AI models that display significant generality, are capable of performing a wide range of distinct tasks and can be integrated into a variety of systems or applications.

GPAIs are subject to certain transparency requirements. For instance, providers of GPAI models must make public a detailed summary of the content used to train them, according to a template to be provided by the AI Office. They will also have to draw up, keep up to date and make available technical documentation to providers of AI systems who intend to integrate the GPAI model into their own AI systems. In addition, all providers of GPAI models have to put in place a policy to comply with EU copyright law, including through advanced technologies (e.g. watermarking).

Some GPAIs are classified as involving a systemic risk, namely (i) those having “high-impact capabilities”, due to their reach (e.g. the number of registered users), or (ii) those trained using a cumulative amount of computing power exceeding 10^25 FLOPs (floating-point operations, a measure of total training compute rather than speed), a threshold that can be modified in the future. Having a systemic risk means having a significant impact on the internal market, or on society as a whole, due to wide reach or to actual or reasonably foreseeable negative effects on public health, safety, public security or fundamental rights. Besides the requirements applicable to all GPAIs, providers of systemic-risk GPAI models are obliged to continuously assess and mitigate the risks, to ensure cybersecurity, and to report serious incidents and take remedial actions.
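To make the compute threshold concrete, here is a small illustrative check in Python. The function name and constant are hypothetical, and the compute test is only one of the criteria; classification can also follow from reach.

```python
# Cumulative training-compute threshold set by the AI Act (may be modified).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model is presumed to have high-impact
    capabilities under the compute criterion alone."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3.2e25))  # True: above 10^25 operations
print(presumed_systemic_risk(5e24))    # False: below the threshold
```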

The Regulation also establishes a new set of institutions dedicated to supervising compliance with the AI requirements: EU-level bodies such as the “AI Office” and the “European AI Board”, as well as national authorities, since each Member State must designate at least one market surveillance authority and at least one notifying authority.

The “AI Office” is the European Artificial Intelligence Office established by the European Commission Decision of 24 January 2024, which will ensure the implementation, monitoring and supervision of AI systems and AI governance in the EU, particularly regarding general-purpose AI models. The AI Office will also be responsible for drafting the codes of practice related to the AI Act.3

The “European AI Board” shall be composed of one representative per Member State, with the role of supporting the work of the AI Office and of coordinating the national competent authorities responsible for the application of this Regulation.

It is important to note that the rules established by this Regulation will apply to providers of AI systems irrespective of whether they are established within the European Union or in a third country.

Interested in the fines that infringements of the AI Act can trigger? Infringements of the rules on prohibited AI practices are subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. For infringements related to high-risk AI systems or GPAIs, the fines go up to EUR 15 million or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher. The reference values are EUR 7.5 million or 1% for the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request. The same reference values apply to SMEs, including start-ups, except that the lower of the two amounts applies.
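As a worked illustration of the “whichever is higher” rule (and the lower alternative for SMEs), here is a small Python sketch; the function is purely illustrative and not part of the Act.

```python
def administrative_fine_cap(fixed_cap_eur: float, turnover_pct: float,
                            annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine under the AI Act's caps: the higher of
    the fixed amount and the turnover percentage for most undertakings,
    the lower of the two for SMEs and start-ups."""
    pct_cap = annual_turnover_eur * turnover_pct / 100
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice infringement, undertaking with EUR 1 billion turnover:
print(administrative_fine_cap(35_000_000, 7, 1_000_000_000))            # 70000000.0
# The same infringement for an SME with EUR 10 million turnover:
print(administrative_fine_cap(35_000_000, 7, 10_000_000, is_sme=True))  # 700000.0
```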

To conclude, the fines are set high enough to make the AI Act a very serious matter for all companies involved in developing AI systems and placing them on the market in the European Union.

The AI Act, approved at first reading on 13 March 2024, must still be adopted in its final version in a plenary session of the European Parliament (possibly in May 2024), then endorsed by the Council and published in the EU’s Official Journal before entering into force.

The AI Act will enter into force 20 days after its publication, and most of its provisions will apply after 24 months. The rules on prohibited AI practices will apply after 6 months, the rules on GPAIs after 12 months, and the rules on certain high-risk AI systems after 36 months. More favorable transition periods are provided for AI systems and GPAI models already placed on the EU market before the entry into force of the AI Act. In my opinion, all these deadlines are too long when compared with the potentially massive speed at which AI models and systems develop.
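To see how the staggered timeline plays out, here is a small Python sketch that derives the application dates from a hypothetical publication date (the actual date was not yet known when this article was written).

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; assumes the day of month exists in the
    # target month (true here, since the example date falls on the 21st).
    month = d.month - 1 + months
    return d.replace(year=d.year + month // 12, month=month % 12 + 1)

def application_dates(publication: date) -> dict:
    """Derive the staggered application dates from a hypothetical
    Official Journal publication date."""
    eif = publication + timedelta(days=20)  # entry into force
    return {
        "entry into force": eif,
        "prohibited practices (+6 months)": add_months(eif, 6),
        "GPAI rules (+12 months)": add_months(eif, 12),
        "most provisions (+24 months)": add_months(eif, 24),
        "certain high-risk rules (+36 months)": add_months(eif, 36),
    }

# With a hypothetical publication on 1 July 2024:
for milestone, d in application_dates(date(2024, 7, 1)).items():
    print(milestone, d)
```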

 

1 You can read the full text of the AI Act adopted by the European Parliament here.

2 See the Briefing of 11-03-2024 on the Artificial Intelligence Act.

3 Read more about the recruitment process for the AI Office here.

 

Author: Veronica Floroiu, Managing Associate, Milcev Burbea