The European Commission has welcomed the political agreement reached last weekend between the European Parliament and the council on the Artificial Intelligence Act (AI Act), proposed by the commission in April 2021.

The draft act aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. The main idea is to regulate AI based on the its capacity to cause harm to society following a ‘risk-based’ approach: the higher the risk, the stricter the rules. It is the first legislative proposal of its kind in the world.

New rules

The new rules will be applied directly in the same way across all member states, based on a future-proof definition of AI. They follow a risk-based approach:

Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal risk applications such as AI-enabled recommender systems or spam filters will benefit from a free-pass and absence of obligations, as these systems present only minimal or no risk for citizens' rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructures for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk. 

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance encouraging dangerous behaviour of minors or systems that allow ‘social scoring' by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace and some systems for categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

Specifictransparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.

Fines

Companies not complying with the rules will be fined. Fines would range from €35mor 7% of global annual turnover (whichever is higher) for violations of banned AI applications, €15m or 3% for violations of other obligations and €7.5m or 1.5% for supplying incorrect information. More proportionate caps are foreseen for administrative fines for SMEs and startups in case of infringements of the AI Act.

General purpose AI

The AI Act introduces dedicated rules for general purpose AI models that will ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practices developed by industry, the scientific community, civil society and other stakeholders together with the commission.

In terms of governance, national competent market surveillance authorities will supervise the implementation of the new rules at national level, while the creation ofa new European AI Office within the European Commission will ensure coordination at European level.

The new AI Office will also supervise the implementation and enforcement of the new rules on general purpose AI models. Along with the national market surveillance authorities, the AI Office will be the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point.

For general purpose models, a scientific panel of independent experts will play a central role by issuing alerts on systemic risks and contributing to classifying and testing the models.