Wed May 15 2024

What is the EU AI Act and how does it regulate AI systems?

Artificial intelligence (AI) is already present in a large number of processes and products today. Because of this widespread use, trust in the transparency, reliability, and fairness of AI is crucial. So crucial, in fact, that the European Union has drafted the first ever legal framework for the regulation of AI: the EU AI Act.

This historic regulation is the first set of legally binding rules for AI systems and aims to protect EU citizens while also promoting innovation. To achieve this, the EU AI Act calls for governance throughout the entire life cycle of AI systems, so that related risks remain manageable. With the right approach, implementing the AI Act presents not only challenges but also important opportunities: organizations can take a pioneering role in the worldwide digital transformation, demonstrating social responsibility while improving the quality of their AI.

AI Act: risk-based legislation

Most current AI tools are perfectly safe to use and already create many benefits for their users. To keep it that way and protect users from potentially dangerous systems, however, regular risk assessments are necessary. The EU AI Act establishes obligations for providers and users depending on the assessed risk level of the AI system.

Unacceptable risk

AI systems that are considered a threat to people fall under the category of unacceptable risk and will be banned. Examples are:

  • cognitive behavioral manipulation systems (e.g. voice-activated toys that encourage dangerous behavior in children)
  • social scoring systems that classify people based on socio-economic status, personal characteristics, or behavior
  • remote, real-time biometric identification systems

However, some exceptions may be made for the use of biometric identification systems for law enforcement purposes.

High risk

Any AI system that could negatively affect fundamental rights or safety is considered a “high-risk system” under the EU AI Act and will be assessed before it is put on the market. Throughout its life cycle, regular risk assessments are mandatory. The EU AI Act also introduces the right to file complaints about AI systems with a designated national authority.

EU copyright laws and transparency requirements

While generative AI will not automatically be considered a high-risk system, it will still have to comply with EU copyright law and transparency requirements such as:

  • disclosing if content was generated by AI
  • designing the AI Model in a way that prevents it from generating illegal content
  • publishing summaries of the copyrighted data used for training

That means any content that is modified or generated by AI systems – such as audio, video, or images (e.g. deepfakes) – must be clearly labelled, so that users are aware that they have come across AI-generated content.

High-impact general-purpose AI models (such as GPT-4) that could pose a systemic risk will have to undergo thorough evaluations. Serious incidents will have to be reported to the European Commission.

Promoting innovation

The law is not intended to stifle innovation, but rather to regulate it. For that purpose, the AI Act also requires national authorities to provide a testing environment – a so-called regulatory sandbox – in which small and medium-sized enterprises and start-ups can safely develop and train AI models before releasing them to the general public.
