
Artificial intelligence (AI) act: Council gives final green light to the first worldwide rules on AI

Today the Council approved a ground-breaking law aiming to harmonise rules on artificial intelligence, the so-called artificial intelligence act. The flagship legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules. It is the first of its kind in the world and can set a global standard for AI regulation.

The new law aims to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it aims to ensure respect for the fundamental rights of EU citizens and to stimulate investment and innovation in artificial intelligence in Europe. The AI act applies only to areas within EU law and provides exemptions, for instance for systems used exclusively for military and defence purposes, as well as for research.

“The adoption of the AI act is a significant milestone for the European Union. This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI act, Europe emphasises the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”

Mathieu Michel, Belgian secretary of state for digitisation, administrative simplification, privacy protection, and the building regulation

Classification of AI systems as high-risk and prohibited AI practices

The new law categorises different types of artificial intelligence according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorised but subject to a set of requirements and obligations to gain access to the EU market. AI systems used for practices such as cognitive behavioural manipulation and social scoring will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorise people by sensitive characteristics such as race, religion, or sexual orientation.

General purpose AI models

The AI act also addresses the use of general-purpose AI (GPAI) models.

GPAI models not posing systemic risks will be subject to some limited requirements, for example with regard to transparency, but those with systemic risks will have to comply with stricter rules. 

A new governance architecture

To ensure proper enforcement, several governing bodies are set up:

  • An AI Office within the Commission to enforce the common rules across the EU
  • A scientific panel of independent experts to support the enforcement activities
  • An AI Board with member states’ representatives to advise and assist the Commission and member states on consistent and effective application of the AI Act
  • An advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission 

Penalties

The fines for infringements of the AI act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. SMEs and start-ups are subject to proportional administrative fines.

Transparency and protection of fundamental rights

Before a high-risk AI system is deployed by certain entities providing public services, its impact on fundamental rights will need to be assessed. The regulation also provides for increased transparency regarding the development and use of high-risk AI systems. High-risk AI systems, as well as certain users of high-risk AI systems that are public entities, will need to be registered in the EU database for high-risk AI systems, and users of an emotion recognition system will have to inform natural persons when they are being exposed to such a system.

Measures in support of innovation

The AI act provides for an innovation-friendly legal framework and aims to promote evidence-based regulatory learning. The new law foresees that AI regulatory sandboxes, which enable a controlled environment for the development, testing and validation of innovative AI systems, should also allow for testing of such systems in real-world conditions.

Next steps

After being signed by the presidents of the European Parliament and of the Council, the legislative act will be published in the EU’s Official Journal in the coming days and enter into force twenty days after this publication. The new regulation will apply two years after its entry into force, with some exceptions for specific provisions. 

Background

The AI act is a key element of the EU’s policy to foster the development and uptake across the single market of safe and lawful AI that respects fundamental rights. The Commission (Thierry Breton, commissioner for internal market) submitted the proposal for the AI act in April 2021. Brando Benifei (S&D / IT) and Dragoş Tudorache (Renew Europe / RO) were the European Parliament’s rapporteurs on this file and a provisional agreement between the co-legislators was reached on 8 December 2023.


Press contacts

Dimosthenis Mammonas
Press officer
+32 477 61 20 7
+32 2 281 2504
@dimos_mammonas

If you are not a journalist, please send your request to the public information service.
