In a groundbreaking development, European Union (EU) officials have reached a provisional agreement on the world’s first comprehensive law regulating Artificial Intelligence (AI). After 36 hours of talks, negotiators agreed on rules governing AI applications, including systems such as ChatGPT and facial recognition. The proposed legislation, known as the AI Act, will undergo a vote in the European Parliament early next year, with enforcement expected no earlier than 2025.
Key Provisions of the AI Act
Definition and Scope
The AI Act defines AI as software capable of generating outputs such as content, predictions, recommendations, or decisions based on human-defined objectives. Importantly, the agreement clearly distinguishes AI from simpler software systems, aligning the definition with the OECD’s approach.
Classification of AI Systems
The agreement introduces a risk-based approach, categorizing AI systems based on their potential harm to society. High-risk AI models and systems will face stricter rules, emphasizing transparency obligations and fundamental rights impact assessments before deployment.
Prohibited AI Practices
The EU unequivocally prohibits certain AI practices. Notably, these include cognitive behavioral manipulation, untargeted scraping of facial images, emotion recognition in workplaces and educational institutions, social scoring, biometric categorization to infer sensitive data, and specific cases of predictive policing for individuals.
Law Enforcement Exceptions
Acknowledging the distinctive requirements of law enforcement, the agreement permits the deployment of high-risk AI tools in pressing situations. Real-time remote biometric identification in public spaces is also authorized, subject to stringent safeguards, but only for targeted searches for victims of certain crimes, the prevention of genuine threats such as terrorist attacks, and searches for individuals suspected of committing serious crimes.
EU Governance Architecture
To supervise advanced AI models, the establishment of an AI Office within the European Commission is mandated. This office will collaborate closely with a scientific panel of independent experts and an AI Board consisting of representatives from member states. Furthermore, an advisory forum for stakeholders, encompassing industry, SMEs, start-ups, civil society, and academia, will contribute technical expertise to support the functions of the AI Board.
Penalties Imposed by the EU
The AI Act introduces fines for violations, calculated as a percentage of the offending company’s global annual turnover or a predetermined amount, whichever is higher. Proportionate caps on administrative fines for SMEs and start-ups are included to avoid excessive burdens.
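To make the “whichever is higher” mechanism concrete, the minimal Python sketch below computes a fine as the maximum of a turnover-based amount and a fixed amount. The rate and figures used are illustrative placeholders, not values taken from the Act, which sets different tiers depending on the category of violation.

```python
def administrative_fine(global_annual_turnover_eur: float,
                        turnover_rate: float,
                        fixed_amount_eur: float) -> float:
    """Return the applicable fine under a 'whichever is higher' rule:
    a share of worldwide annual turnover or a fixed amount."""
    return max(turnover_rate * global_annual_turnover_eur, fixed_amount_eur)


# Illustrative placeholders only; the actual rates and fixed amounts
# in the AI Act vary by category of violation.
fine = administrative_fine(
    global_annual_turnover_eur=600_000_000,  # hypothetical company turnover
    turnover_rate=0.07,                      # assumed percentage tier
    fixed_amount_eur=35_000_000,             # assumed fixed amount
)
print(f"Applicable fine: EUR {fine:,.0f}")   # -> EUR 42,000,000
```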
Transparency and Fundamental Rights
A fundamental rights impact assessment is required before deploying high-risk AI systems. Transparency obligations also apply: public entities must register the high-risk AI systems they use in the EU database. Additional provisions require that individuals be informed when they are exposed to emotion recognition systems.
Measures in Support of Innovation
To foster an innovation-friendly environment, the AI Act includes provisions for AI regulatory sandboxes, allowing the development, testing, and validation of innovative AI systems in real-world conditions. Specific actions and derogations support smaller companies and reduce administrative burdens.
Entry into Force
The AI Act is set to apply two years after its entry into force, with exceptions for specific provisions.