On March 13, 2024, the European Parliament approved the EU Artificial Intelligence (AI) Act, marking a significant milestone in AI governance. This comprehensive regulation aims to ensure responsible AI development, protect fundamental rights, and foster innovation within the European Union, while giving stakeholders time to align with its requirements.

Of course, this aspiration is generally true of AI governance initiatives being rolled out in both the public and private sectors. The words are familiar. The values are routinely espoused by a wide range of stakeholders.

But the AI Act is truly a pivotal milestone. In adopting the AI Act, the EU has again taken a leadership role in technology regulation, staking out a reference point that, at least for the next few years, will frame how the United States, other governments, and companies building and using AI models approach AI governance and regulatory tools.

Similar to the GDPR, the AI Act’s scope is broad, covering all AI systems that are sold, offered, put into service, or used within the EU. Providers or deployers of AI systems located outside the EU are captured by the AI Act if the output of their systems is used in the EU. Companies based in the EU that provide AI systems are captured even if they do not deploy those systems in the EU. There are certain limited exceptions for personal and research use.

Key aspects of the AI Act include:

1. Broad Definition of Regulated AI Systems

The AI Act broadly defines a regulated AI System:

A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

2. A Tiered, Risk-Based Approach to Regulation

The AI Act categorizes AI systems into four tiered risk levels: unacceptable, high, limited, and minimal or no risk. As the risk associated with a particular category of system rises, stricter rules apply. Each tier is described below, followed by a brief illustrative sketch.

Unacceptable AI Systems: Certain systems that violate fundamental EU rights are banned outright.

Examples of prohibited AI systems include:

AI Systems That Deploy Subliminal and Manipulative Techniques

  • Systems that subtly influence behavior or decision-making fall into this category. Such techniques can be harmful and undermine individual autonomy.

AI Systems Capable of Monitoring and Profiling Natural Persons

  • These systems continuously track and analyze personal data, and can lead to prohibited profiling based on race, political opinions, trade union membership, or other protected classifications.
  • The risk of privacy infringement and discrimination is high.

AI Systems That Target Vulnerable Groups

  • Any AI system designed to exploit or harm vulnerable populations (such as children, the elderly, or marginalized communities) is considered unacceptable.

AI Systems That Create a Social Score Leading to Detrimental Treatment

  • These systems evaluate persons or groups based on social behavior, leading to detrimental treatment of the person or group in contexts unrelated to those in which the data was collected.

AI Systems Used in Real-Time Remote Biometric Identification

  • These systems can identify individuals based on biometric data (such as facial recognition) in real time.
  • The potential for misuse or violation of privacy rights makes them unacceptable.

High-Risk AI Systems: These systems have the potential to negatively affect safety or fundamental rights, such as those related to critical infrastructures (transport, utilities) or public health and safety. 

High-risk AI systems fall into two categories: (1) AI systems that are used in products falling under the EU’s product safety legislation, such as toys, aviation, cars, medical devices and lifts; and (2) AI systems in specific areas that will have to be registered in an EU database. These include education and vocational training, employment, law enforcement, immigration, and critical infrastructure. In general, this category of regulation:

  • Imposes Strict Obligations: High-risk AI systems must undergo conformity assessments before entering the EU market. 
  • Allows Limited Law Enforcement Exemptions: Limited use of real-time remote biometric identification systems is allowed for specific purposes, subject to strict safeguards. 

Transparency Risk AI Systems: These limited-risk systems require transparency measures but are not subject to the same level of regulation as high-risk systems. For example, AI-generated images, chatbots, and AI systems in the workplace pose a transparency risk that must be disclosed to the user.

General-Purpose AI (GPAI): Additional requirements apply to GPAI, including widely used generative AI models. Based on the model's capabilities and the computational resources used to train it, different risk levels apply.
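
To make the tiers concrete, the sketch below shows one way a compliance team might tag an internal inventory of AI systems against the Act's four risk levels. It is a minimal illustration only: the system names, use cases, and tier assignments are hypothetical, and actual classification turns on the Act's detailed criteria rather than a label in a script.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified labels for the AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, registration, etc.
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

# Hypothetical inventory entries; the names and tier assignments are
# illustrative only and are not determinations under the AI Act.
inventory = [
    {"system": "resume-screening-model", "use_case": "employment", "tier": RiskTier.HIGH},
    {"system": "marketing-chatbot", "use_case": "customer service", "tier": RiskTier.LIMITED},
    {"system": "spam-filter", "use_case": "email triage", "tier": RiskTier.MINIMAL},
]

for entry in inventory:
    print(f"{entry['system']}: {entry['tier'].value} risk ({entry['use_case']})")
```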

3. Imposing Significant Compliance and Documentation Requirements for AI Risk Categories

For those AI systems permitted under the AI Act, the EU has set out a vast checklist of requirements that are daunting in their scope. There is some specificity as to these requirements, but the precise rules, and their boundaries, will only become clear gradually.

Providers of High-Risk AI Systems

Providers of high-risk AI systems must fulfill the following obligations to ensure compliance with the AI Act:

Pre-Market Assessment:

High-risk AI systems will be assessed before they are put on the market, and periodically thereafter. A provider may self-assess its AI system for compliance, but if the AI system uses biometrics, the assessment must be performed with the involvement of a notified body. A declaration of conformity must be provided before deployment, and the provider must affix the “CE marking of conformity” to the AI system. Certain requirements are imposed on the underlying training data of high-risk AI systems, but there is an exception permitting the use of special categories of personal data where necessary to detect and correct bias.

Quality Management System:

Providers must establish and maintain an effective, comprehensive quality management system. This system ensures that the development, deployment, and ongoing operation of high-risk AI systems adhere to predefined standards.

Post-Market Monitoring:

Providers must implement post-market monitoring systems that allow them to track the performance, safety, and impact of their AI systems after deployment.

They must conduct regular assessments and updates to address any emerging risks or issues of concern.

Technical Documentation to Create Transparency:

Providers must create and maintain detailed technical documentation for their AI systems. This documentation should cover aspects such as system architecture, algorithms, data sources, and risk assessments. 

The documentation serves as a critical reference point for regulatory authorities and ensures transparency.

Conformity Assessment Procedure:

Before placing a high-risk AI system on the market, providers must subject it to a thorough conformity assessment.

This assessment evaluates the system's compliance with legal requirements, safety standards, and ethical considerations.

It includes evaluating training, validation, and testing datasets against stringent quality requirements.
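
The Act does not prescribe specific tests, but as a rough illustration, the kind of basic data-quality checks a provider might run over a training dataset could look like the sketch below. The fields, records, and checks are hypothetical and are not requirements drawn from the Act itself.

```python
from collections import Counter

# Hypothetical training records; in practice these would come from the
# provider's own data pipeline.
records = [
    {"age": 34, "outcome": "approved"},
    {"age": None, "outcome": "denied"},
    {"age": 51, "outcome": "approved"},
]

# Completeness check: count records with missing values.
missing = sum(1 for r in records if any(v is None for v in r.values()))
print(f"Records with missing values: {missing}/{len(records)}")

# Representativeness check (very rough): inspect the outcome distribution.
outcome_counts = Counter(r["outcome"] for r in records)
print(f"Outcome distribution: {dict(outcome_counts)}")
```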

Users (Deployers) of High-Risk AI Systems

Companies deploying high-risk AI systems also have specific obligations, albeit less extensive than those of providers, to ensure that end users (those interacting with the AI system) are aware that they are dealing with an AI system. Human oversight is essential for compliance. Furthermore, deployers must inform impacted individuals if the deployer is making a decision relating to such individuals based on AI.
This applies to both EU-based users and third-country users whose system output is used within the EU.

General-Purpose AI (GPAI) Models

The AI Act imposes documentation requirements on GPAI providers and classifies GPAI models based on their impact in the EU market.

Technical Documentation for GPAI Models:

All GPAI model providers, regardless of whether they offer open or closed models, must provide comprehensive and up-to-date technical documentation. This documentation includes details about the model's functionality, training data, and intended use.

Authorized Representative:

Prior to placing a GPAI model on the market, providers established outside the EU must appoint an authorized representative within the EU.

Compliance with Copyright Directive:

GPAI model providers must also comply with copyright regulations, ensuring their models do not infringe on intellectual property rights.

Model Evaluations and Incident Reporting:

GPAI models presenting a systemic risk must undergo regular model evaluations, and their providers must continuously assess the models for security and systemic risk.

Whether a model presents a systemic risk is based on certain technical metrics, or as evaluated by the AI Office.
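
One of the technical metrics referenced is the cumulative amount of compute used for training: a model trained with more than 10^25 floating-point operations is presumed to present systemic risk. A minimal check of that presumption might look like the sketch below, where the training-compute figure is hypothetical.

```python
# The 1e25 FLOP figure is the Act's threshold for presuming systemic risk;
# the training-compute value below is purely hypothetical.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

cumulative_training_compute_flops = 3.2e24  # hypothetical

presumed_systemic_risk = cumulative_training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
print(f"Presumed systemic risk: {presumed_systemic_risk}")
```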

Providers must track and report any serious incidents related to their models.

4. Implementation Timeline

The AI Act outlines a phased approach to implementation, allowing organizations and stakeholders to adapt gradually. Here are the key timeframes:

Awaiting Formal Endorsement by the Council of the European Union:

After approval by the European Parliament, the AI Act awaits formal endorsement by the Council of the European Union.

Once endorsed and published in the Official Journal, the Act becomes legally binding.

36-Month Transition Period:

Over the following 36 months, various provisions of the AI Act will come into force, starting with the prohibitions on unacceptable-risk AI systems, which apply within 6 months. After 12 months, general-purpose AI models must begin to comply with certain provisions of the Act.
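
For a rough sense of the calendar, the sketch below computes those milestones from a hypothetical entry-into-force date; the date itself is an assumption used only to illustrate the arithmetic.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date, clamping the day to the target month."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

entry_into_force = date(2024, 7, 1)  # hypothetical entry-into-force date

milestones = {
    "Prohibitions on unacceptable-risk systems (6 months)": add_months(entry_into_force, 6),
    "GPAI obligations begin to apply (12 months)": add_months(entry_into_force, 12),
    "End of the 36-month transition period": add_months(entry_into_force, 36),
}

for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```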

Continuous Review and Adaptation:

The AI Act encourages continuous review and adaptation as AI technology evolves.

5. Fines for Noncompliance Based on Global Turnover

Violations of the AI Act can result in fines calculated as a percentage of a company's global annual turnover.

Depending on the severity of the offence and the size of the company, fines range from 1.5% to 7%.
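
For a rough sense of scale, the snippet below applies the two ends of that range to a hypothetical €2 billion global annual turnover; the figure is illustrative only, and actual penalties also depend on the specific infringement and the fixed amounts set out in the Act.

```python
# Hypothetical global annual turnover of €2 billion, used only for scale.
global_annual_turnover_eur = 2_000_000_000

for rate in (0.015, 0.07):  # the 1.5% and 7% ends of the range
    print(f"{rate:.1%} of turnover -> €{global_annual_turnover_eur * rate:,.0f}")
```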

Companies deploying AI systems within the reach of the EU AI Act should:

  • Evaluate use of AI systems and consider how they might be classified under the AI Act’s risk hierarchy.
  • Consider the need to expand or develop AI governance programs within the company.
  • Leverage existing frameworks and principles to manage AI-related risks and compliance, with a view to how these may need to evolve quickly as new laws and technologies develop.