The EU AI Act is Here: What Developers and Companies Need to Know
Published: December 17, 2025 | Reading Time: 8 minutes
TL;DR: The EU AI Act is the world’s first comprehensive AI regulation. The bans on prohibited practices and the
obligations for general-purpose AI are now in force. Here’s what you need to know about compliance, risk
classifications, and penalties.
What is the EU AI Act?
The EU AI Act is the European Union’s comprehensive framework for regulating artificial intelligence. It’s the
world’s first major AI-specific legislation, and it affects:
- Companies building AI systems
- Companies deploying AI systems
- Any AI product or service whose output is used in the EU
If you’re reading this from anywhere in the world and your AI touches EU users, this law applies to you.
Timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act entered into force |
| February 2, 2025 | Prohibited AI practices take effect |
| August 2, 2025 | General-purpose AI obligations begin |
| August 2, 2026 | Full application for most provisions |
We’re now in the phase where general-purpose AI model obligations are active.
The Risk Classification System
The EU AI Act classifies AI systems into four risk categories:
1. Unacceptable Risk (Banned)
These AI applications are prohibited entirely:
- Social scoring systems (like China’s social credit)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Manipulation techniques that exploit vulnerabilities
- Emotion recognition in workplaces and schools
2. High Risk (Heavily Regulated)
AI systems in these areas face strict requirements:
- Critical infrastructure (energy, transport, water)
- Education (scoring, admission decisions)
- Employment (hiring, evaluation, promotion)
- Essential services (credit scoring, insurance)
- Law enforcement and immigration
- Justice and democratic processes
High-risk systems must:
- Implement risk management systems
- Maintain technical documentation
- Enable human oversight
- Ensure accuracy, robustness, and cybersecurity
- Register in an EU database
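The Act sets these outcomes but doesn’t prescribe an implementation. As a purely illustrative sketch, here is one way a team might structure a logged, human-reviewable decision record; every name here (DecisionRecord, requires_human_review, the 0.9 threshold) is hypothetical, not a regulatory requirement:

```python
# Hypothetical sketch: one way to log a high-risk AI decision so a human
# can review and override it. Names, fields, and the threshold are
# illustrative; the Act does not prescribe this structure.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system_id: str                 # which registered AI system produced this
    input_summary: str             # what the model was asked to decide
    output: str                    # the model's recommendation
    confidence: float              # model confidence in [0.0, 1.0]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_reviewed: bool = False   # flipped once an operator signs off
    reviewer_id: str | None = None

def requires_human_review(record: DecisionRecord, threshold: float = 0.9) -> bool:
    """Route low-confidence outputs to a human operator (example policy)."""
    return record.confidence < threshold

# Example: a hiring-screening decision, one of the Act's high-risk use cases
record = DecisionRecord(
    system_id="cv-screener-v2",
    input_summary="Candidate 4821, software engineer role",
    output="advance to interview",
    confidence=0.72,
)
if requires_human_review(record):
    print(f"Escalating {record.system_id} decision for human sign-off")
```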
3. Limited Risk (Transparency Required)
Systems like chatbots must clearly disclose that users are interacting with an AI system rather than a human.
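In code, the disclosure itself can be trivial; the compliance work is making sure it actually reaches the user. A minimal sketch, with invented names and wording:

```python
# Minimal sketch of surfacing an AI disclosure on a chatbot's first turn.
# The wording and function name are invented for illustration.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def render_reply(reply: str, is_first_turn: bool) -> str:
    # Disclose at the start of the conversation so the user knows
    # they are interacting with an AI system.
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(render_reply("Hi! How can I help you today?", is_first_turn=True))
```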
4. Minimal Risk (No Special Requirements)
Most AI applications (spam filters, video games, inventory management) face no additional regulation beyond
existing laws.
General-Purpose AI (GPAI) Rules
The general-purpose AI obligations, in force since August 2025, target models like GPT, Claude, and Gemini.
Providers now face two tiers of requirements:
All GPAI Models
- Technical documentation
- Information for downstream providers
- Copyright compliance and transparency
- Published summary of training data
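The EU publishes its own template for the public training-data summary; the sketch below only illustrates, with invented fields, the kind of metadata a provider might keep internally to back that documentation:

```python
# Hypothetical sketch: internal metadata a GPAI provider might maintain
# to support its technical documentation and public training-data
# summary. Field names are invented; the EU's official template governs
# what the published summary must contain.
import json

training_data_summary = {
    "model_name": "example-gpai-model",
    "data_sources": [
        {"type": "web_crawl",
         "description": "Filtered public web pages",
         "collection_period": "2023-01 to 2024-06"},
        {"type": "licensed",
         "description": "Licensed news and book archives"},
    ],
    "copyright_policy": "Respects robots.txt and machine-readable TDM opt-outs",
    "downstream_info": "Model card covering capabilities, limits, intended uses",
}

print(json.dumps(training_data_summary, indent=2))
```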
GPAI Models with Systemic Risk (training compute above 10^25 FLOPs)
- Model evaluations for systemic risk
- Risk assessment and mitigation
- Incident monitoring and reporting
- Cybersecurity protections
This applies to the largest AI models from OpenAI, Google, Anthropic, and Meta.
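The threshold is about training compute, which providers can estimate before a run. A common back-of-envelope for dense transformers is roughly 6 × parameters × training tokens; under that rule of thumb (an approximation, not the Act’s measurement methodology), a quick check looks like this:

```python
# Back-of-envelope check against the 10^25 FLOPs systemic-risk threshold,
# using the common ~6 * params * tokens estimate for dense transformer
# training compute. A rule of thumb, not the Act's official methodology.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Hypothetical 400B-parameter model trained on 15T tokens
flops = estimated_training_flops(400e9, 15e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")                   # 3.6e+25
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```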
Penalties
The EU AI Act has significant fines, each capped at the higher of a fixed amount or a share of global annual turnover:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover |
| High-risk non-compliance | €15 million or 3% of global annual turnover |
| Incorrect information to authorities | €7.5 million or 1% of global annual turnover |
For a company like OpenAI, with revenue in the billions, the percentage-based cap is the one that bites; the worked
example below shows the scale.
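Since the fine is the higher of the fixed amount and the revenue percentage, the percentage dominates for large providers. With a hypothetical turnover figure:

```python
# Worked example: the cap is the higher of the fixed amount and the
# percentage of global annual turnover. The revenue figure is hypothetical.
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    return max(fixed_eur, pct * global_turnover_eur)

# Prohibited-practice violation (EUR 35M or 7%) at EUR 5B annual turnover:
print(f"EUR {max_fine(35e6, 0.07, 5e9):,.0f}")  # EUR 350,000,000
```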
What This Means for Developers
If You’re Building AI
- Document your training data and model capabilities
- Implement risk assessments for high-risk applications
- Ensure transparency about AI usage to end users
- Build in human oversight mechanisms where required
If You’re Deploying AI
- Understand what risk category your use case falls into (see the triage sketch after this list)
- Ensure your AI provider is compliant
- Maintain records of AI system usage
- Train staff on AI compliance requirements
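As a starting point for that classification, here is a deliberately simplified triage sketch; the category lists are a rough paraphrase of the Act’s tiers, not legal advice or an exhaustive mapping:

```python
# Hypothetical sketch: first-pass triage of a use case into the Act's
# four risk tiers. The domain lists are simplified illustrations of the
# Act's categories, not an exhaustive or authoritative mapping.
HIGH_RISK_DOMAINS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition"}

def triage_risk_tier(use_case: str, user_facing_chatbot: bool = False) -> str:
    if use_case in PROHIBITED_USES:
        return "unacceptable (banned)"
    if use_case in HIGH_RISK_DOMAINS:
        return "high (strict obligations)"
    if user_facing_chatbot:
        return "limited (transparency duties)"
    return "minimal (no special requirements)"

print(triage_risk_tier("employment"))                             # high
print(triage_risk_tier("support_bot", user_facing_chatbot=True))  # limited
```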
If You’re Using Third-Party AI
- Request compliance documentation from providers
- Verify the AI meets your use case requirements
- Don’t use AI for prohibited purposes
Global Impact
Even if you’re not in the EU, this matters:
The Brussels Effect
Companies often adopt the strictest global standard to simplify compliance. EU rules may become de facto global
standards, just as GDPR did for privacy.
US Response
The US has been working on its own AI policy proposals, though they’re less comprehensive. The EU’s approach may
influence American regulation.
Competitive Dynamics
Some argue strict regulation disadvantages EU companies. Others argue it creates a competitive advantage through
trust and safety.
How to Prepare
- Audit your AI systems: Identify what AI you’re using and what risk category it falls into (a starter inventory sketch follows this list)
- Document everything: Training data, model capabilities, intended uses
- Implement transparency: Disclose AI usage to users where required
- Build oversight: Ensure humans can intervene in high-risk decisions
- Stay updated: The Act is still being interpreted; guidance is evolving
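A starter inventory can be as simple as a list of systems with their risk tier and control status; this sketch (all fields and entries invented) flags high-risk systems missing basic controls:

```python
# Hypothetical sketch: a minimal AI-system inventory for a first audit
# pass. All fields and entries are invented for illustration.
inventory = [
    {"system": "resume-screener", "provider": "in-house",
     "risk_tier": "high", "documented": True, "human_oversight": False},
    {"system": "support-chatbot", "provider": "third-party",
     "risk_tier": "limited", "documented": True, "human_oversight": False},
    {"system": "spam-filter", "provider": "third-party",
     "risk_tier": "minimal", "documented": False, "human_oversight": False},
]

# Flag high-risk systems missing documentation or human oversight
for entry in inventory:
    if entry["risk_tier"] == "high" and not (
        entry["documented"] and entry["human_oversight"]
    ):
        print(f"Gap: {entry['system']} needs documentation and human oversight")
```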
The Bottom Line
The EU AI Act is the most significant AI regulation in the world. It affects every major AI company and many AI
use cases.
For responsible developers and companies, much of this is good practice anyway—documentation, transparency,
oversight, and risk management.
For everyone else, it’s time to get compliant. The fines are too large to ignore.