The EU Artificial Intelligence (AI) Act: A Pioneering Framework for AI Governance
Apr 15 2024
On March 13, 2024, the European Parliament approved the EU Artificial Intelligence (AI) Act, marking a significant milestone in AI governance. This comprehensive regulation aims to ensure responsible AI development, protect fundamental rights, and foster innovation within the European Union, while allowing stakeholders time to align with its requirements.
Of course, this aspiration is generally true of AI governance initiatives being rolled out in both the public and private sectors. The words are familiar. The values are routinely espoused by a wide range of stakeholders.
But the AI Act is truly a pivotal milestone. In adopting the AI Act, the EU has again taken a leadership role in technology regulation, staking out a reference point that, at least for the next few years, will frame how the United States, other governments, and companies building and using AI models approach AI governance and regulatory tools.
Similar to GDPR, the AI Act’s scope is broad, covering all AI systems that are sold, offered, put into service or used within the EU. Providers or deployers of AI systems outside of the EU are captured by the AI Act if the results of their system are used in the EU. Companies based in the EU that provide AI systems are captured even if they do not deploy their systems in the EU. There are certain limited exceptions for personal and research use.
Key aspects of the AI Act include:
The AI Act broadly defines a regulated AI System:
A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The AI Act categorizes AI systems based on four tiered risk levels: unacceptable, high, limited, and minimal or no risk.
As the risk associated with a particular category of system rises, stricter rules apply.
Unacceptable-Risk AI Systems: Certain AI systems that violate fundamental EU rights are banned.
Examples of prohibited AI systems include:
AI Systems That Deploy Subliminal and Manipulative Techniques
AI Systems Capable of Monitoring and Profiling Natural Persons
AI Systems That Target Vulnerable Groups
AI Systems That Create a Social Score Leading to Detrimental Treatment
AI Systems Used in Real-Time Remote Biometric Identification
High-Risk AI Systems: These systems have the potential to negatively affect safety or fundamental rights, such as those related to critical infrastructures (transport, utilities) or public health and safety.
High-risk AI systems fall into two categories: (1) AI systems used in products falling under the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and lifts; and (2) AI systems in specific areas that will have to be registered in an EU database, including education and vocational training, employment, law enforcement, immigration, and critical infrastructure. In general, this category of systems is subject to the most extensive obligations under the Act, discussed below.
Transparency Risk AI Systems: These limited-risk systems require transparency measures but are not subject to the same level of regulation as high-risk systems. For example, AI-generated images, chatbots, and AI systems used in the workplace pose a transparency risk that must be disclosed to the user.
General-Purpose AI (GPAI): Additional requirements apply to GPAI, including generative AI used broadly. Different risk levels apply based on the model’s capabilities and the computational resources used to train it.
For those AI systems authorized by the AI Act, the EU has set out a vast checklist of requirements that are daunting in their scope. There is some specificity as to these requirements, but the precise regulations – and boundaries – will only become clear gradually.
Providers of High-Risk AI Systems
Providers of high-risk AI systems must fulfill the following obligations to ensure compliance with the AI Act:
Pre-Market Assessment:
High-risk AI systems will be assessed before they are put on the market, and periodically thereafter. A provider may self-assess its AI system for compliance, but if the AI system uses biometrics, the assessment must be performed with the involvement of a notified body. A declaration of conformity must be provided before deployment, and the provider must affix the “CE marking of conformity” to the AI system. Certain requirements are imposed on the underlying training data of high-risk AI systems, but there is an exception permitting the use of special categories of personal data where necessary to detect and correct bias.
Quality Management System:
Providers must establish and maintain an effective, comprehensive quality management system. This system ensures that the development, deployment, and ongoing operation of high-risk AI systems adhere to predefined standards.
Post-Market Monitoring:
Implement post-market monitoring systems that allow providers to track the performance, safety, and impacts of AI systems after deployment.
Conduct regular assessments and updates to address any emerging risks or issues of concern.
Technical Documentation to Create Transparency:
Providers must create and maintain detailed technical documentation for their AI systems. This documentation should cover aspects such as system architecture, algorithms, data sources, and risk assessments.
The documentation serves as a critical reference point for regulatory authorities and ensures transparency.
Conformity Assessment Procedure:
Before placing a high-risk AI system on the market, providers must subject it to a thorough conformity assessment.
This assessment evaluates the system's compliance with legal requirements, safety standards, and ethical considerations.
It includes evaluating training, validation, and testing datasets against stringent quality requirements.
Users (Deployers) of High-Risk AI Systems
Companies deploying high-risk AI systems also have specific obligations to end users, albeit less extensive than those of providers, to ensure that end users (those interacting with the AI system) are aware that they are dealing with an AI system. Specifically, human oversight is essential for compliance. Furthermore, deployers must inform impacted individuals when the deployer makes a decision relating to such individuals based on AI.
This applies to both EU-based users and third-country users whose system output is used within the EU.
General-Purpose AI (GPAI) Models
The AI Act imposes documentation requirements on GPAI providers and classifies GPAI models based on their impact in the EU market.
Technical Documentation for GPAI Models:
All GPAI model providers, regardless of whether they offer open or closed models, must provide comprehensive and up-to-date technical documentation. This documentation includes details about the model's functionality, training data, and intended use.
Authorized Representative:
Prior to placing a GPAI model on the EU market, providers established outside the EU must appoint an authorized representative established in the Union.
Compliance with Copyright Directive:
GPAI model providers must also comply with copyright regulations, ensuring their models do not infringe on intellectual property rights.
Model Evaluations and Incident Reporting:
GPAI models presenting a systemic risk must undergo regular model evaluations, and their providers must continuously assess the models for security and systemic risk.
Whether a model presents a systemic risk is determined based on certain technical criteria or by an evaluation of the AI Office.
Providers must track and report any serious incidents related to their models.
The AI Act outlines a phased approach to implementation, allowing organizations and stakeholders to adapt gradually. Here are the key timeframes:
Awaiting Formal Endorsement by the Council of the European Union:
After approval by the European Parliament, the AI Act awaits formal endorsement by the Council of the European Union.
Once endorsed and published in the Official Journal, the Act will enter into force and become legally binding.
36-Month Transition Period:
Over the next 36 months, various provisions of the AI Act will come into force, starting with prohibitions on unacceptable-risk AI systems within 6 months. After 12 months, providers of general-purpose AI models must begin to comply with certain provisions of the Act.
Continuous Review and Adaptation:
The AI Act encourages continuous review and adaptations as AI technology evolves.
Violations of the AI Act can result in fines calculated as a percentage of a company's global annual turnover.
Depending on the severity of the offence and the company's size, fines range from 1.5% to 7%.
Companies deploying AI systems within the reach of the EU AI Act should: