EU AI Act Compliance Checklist: Everything Enterprises Need to Know Before 2027
AI governance has moved from a boardroom discussion to a legal obligation. The EU AI Act is the most comprehensive artificial intelligence policy framework enacted anywhere in the world, and enforcement is already underway. For enterprises deploying AI systems in or affecting the European Union, full compliance is required by 2027. Penalties for non-compliance reach up to €35 million or 7% of global annual revenue, whichever is higher.
This guide breaks down what the EU AI Act requires, how it compares to US AI policy, and what your organization needs to do right now.
Understanding the EU AI Act and Its Timeline
The EU AI Act takes a risk-based approach to regulating artificial intelligence. Rather than applying uniform rules across all AI applications, it classifies systems into four risk tiers and applies proportionate obligations to each.
The enforcement timeline is already in motion. Provisions banning unacceptable-risk AI systems took effect in early 2025. Obligations for high-risk AI systems and general-purpose AI models are phasing in through 2025 and 2026. Full compliance across all categories is required by August 2027.
Risk Classification: Where Does Your AI System Fall?
Understanding your AI system's risk classification is the starting point for any EU AI Act compliance program.
Unacceptable Risk (Prohibited)
These systems are banned outright. The category includes social scoring by public authorities, real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions) and AI that manipulates individuals through subliminal techniques. If your organization operates any system in this category, immediate action is required.
High Risk
This is the most regulated category and the one most relevant to enterprise AI teams. High-risk AI systems include those used in employment decisions (CV screening, performance monitoring), credit scoring, healthcare diagnostics, critical infrastructure management, educational assessment and law enforcement tools. If your AI system influences decisions that significantly affect individuals' rights or safety, it is almost certainly high-risk.
Limited Risk
Chatbots and deepfake-generating tools fall into this tier. The primary obligation is transparency. Users must be informed they are interacting with an AI system.
Minimal Risk
Spam filters, AI in video games and similar applications carry no specific obligations under the Act, though general data protection rules still apply.
A practical test: Ask whether your AI system makes or influences consequential decisions about people in regulated domains. If the answer is yes, assume high-risk classification and scope your compliance program accordingly.
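That practical test can be sketched as a first-pass triage function. The domain list and decision logic below are illustrative assumptions for scoping purposes only, not the Act's legal definitions; final classification should always be confirmed with legal counsel.

```python
# Hypothetical triage sketch. The domain set is illustrative, not the
# Act's legal taxonomy -- confirm every classification with counsel.
REGULATED_DOMAINS = {
    "employment", "credit", "healthcare", "education",
    "critical_infrastructure", "law_enforcement",
}

def triage_risk_tier(domain: str,
                     influences_consequential_decisions: bool,
                     interacts_with_users: bool) -> str:
    """Return a first-pass risk tier for scoping a compliance review."""
    if domain in REGULATED_DOMAINS and influences_consequential_decisions:
        return "high"      # assume high-risk; scope a full assessment
    if interacts_with_users:
        return "limited"   # transparency obligations likely apply
    return "minimal"

print(triage_risk_tier("employment", True, False))  # high
```

A conservative triage like this deliberately over-classifies: it is cheaper to rule a system out of the high-risk tier during formal assessment than to discover a missed obligation after deployment.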
EU AI Act Requirements for High-Risk Systems
High-risk AI systems face the most detailed set of EU AI Act requirements. Compliance is not a one-time exercise. It requires ongoing documentation, monitoring and governance infrastructure.
Conformity Assessment
Before deploying a high-risk AI system, organizations must conduct a conformity assessment demonstrating the system meets the Act's technical and governance standards. For many system types, self-assessment is permitted. Some categories require third-party audit.
Technical Documentation
A complete technical file must be maintained covering system design, training data sources, development methodology, performance metrics, known limitations and intended use cases. This documentation must be kept current and made available to regulators on request.
Data Governance
Training data must meet quality standards. Bias testing is mandatory. Organizations must demonstrate that datasets are representative and that data management practices reduce discriminatory outcomes.
Human Oversight
High-risk systems must be designed to allow human intervention. Automated decisions cannot be fully insulated from human review. Oversight mechanisms must be built into the system and documented.
Post-Market Monitoring
Compliance does not end at deployment. Organizations must monitor system performance in production, log incidents and report serious malfunctions to relevant authorities.
General-Purpose AI Models
Foundation models used in EU-facing products carry their own obligations, including training data transparency, safety evaluations and energy consumption reporting for the largest models.
How the EU AI Act Compares to US AI Policy
Global enterprises face the challenge of operating across two regulatory environments with different approaches to artificial intelligence policy.
The United States does not yet have comprehensive federal AI legislation equivalent to the EU AI Act. The US approach relies on executive orders, sector-specific regulation and voluntary frameworks. The NIST AI Risk Management Framework provides a structured methodology for identifying and managing AI risk, but adoption is voluntary for most industries. Sector regulators like the FDA for healthcare AI and the SEC for financial AI apply their existing authority to AI systems within their domains.
| Dimension | EU AI Act | US Approach |
|---|---|---|
| Scope | Comprehensive, cross-sector | Fragmented, sector-specific |
| Legal basis | Binding regulation | Executive orders + voluntary frameworks |
| Risk classification | Mandatory, tiered | Sector-dependent |
| Penalties | Up to €35M or 7% revenue | Varies by sector regulator |
| Timeline | Phased enforcement through 2027 | No unified deadline |
| Foundation model rules | Yes, explicit obligations | Limited, still developing |
For organizations operating in both markets, the practical approach is to build compliance against the EU AI Act's requirements as a baseline. Meeting the EU standard typically satisfies or substantially advances US compliance obligations, particularly for organizations using the NIST AI Risk Management Framework as a reference.
The EU AI Act Compliance Checklist
Use this AI compliance checklist to assess your organization's current readiness and identify gaps.
1. Build a Model Registry
Document every AI system your organization deploys or procures. Include system name, vendor (if third-party), use case, deployment geography and data inputs. Without a complete inventory, risk classification is impossible.
2. Classify Each System by Risk Tier
Apply the EU AI Act's classification criteria to every system in your registry. Flag all probable high-risk systems for deeper assessment.
3. Conduct Risk Assessments for High-Risk Systems
For each high-risk system, complete a formal AI risk assessment framework review covering intended use, foreseeable misuse, impact on fundamental rights and technical robustness.
4. Establish Bias and Fairness Testing Protocols
Define testing methodology, acceptable thresholds and remediation procedures. Testing must occur before deployment and at defined intervals during operation.
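One common fairness metric that such a protocol might include is the disparate impact ratio: each group's selection rate divided by a reference group's rate. The Act does not prescribe specific metrics or thresholds, so the 0.8 trigger below is a placeholder your own protocol would define.

```python
# Sketch of a disparate impact check. The metric choice and the 0.8
# threshold are illustrative assumptions, not requirements of the Act.
def disparate_impact_ratio(selected: dict[str, int],
                           total: dict[str, int],
                           reference_group: str) -> dict[str, float]:
    """Selection rate of each group relative to the reference group."""
    ref_rate = selected[reference_group] / total[reference_group]
    return {g: (selected[g] / total[g]) / ref_rate for g in total}

selected = {"group_a": 40, "group_b": 24}   # hypothetical outcomes
total = {"group_a": 100, "group_b": 100}
ratios = disparate_impact_ratio(selected, total, reference_group="group_a")

THRESHOLD = 0.8  # illustrative remediation trigger
flagged = [g for g, r in ratios.items() if r < THRESHOLD]
print(flagged)  # ['group_b']
```

Whatever metrics are chosen, the protocol should record the threshold, the test date, and the remediation taken, since those records feed the technical file described above.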
5. Document Data Governance Practices
Capture data sourcing, labeling methodology, quality controls and retention policies. This documentation forms part of the required technical file.
6. Define Human Oversight Mechanisms
Identify who has authority to override or pause AI-driven decisions. Build escalation paths into operational processes and document them.
7. Create Transparency Notices
Where the Act requires disclosure (chatbots, AI-generated content, high-risk decisions), draft user-facing notices and integrate them into relevant interfaces.
8. Assess Third-Party AI Vendors
If you deploy AI systems built by third parties, assess whether those vendors have met their obligations under the Act. Responsibility for compliance does not automatically transfer to vendors through contract; deployers retain their own obligations.
9. Establish Incident Reporting Procedures
Define what constitutes a reportable incident, who is responsible for reporting and how reports are filed with relevant national authorities.
10. Schedule Post-Deployment Monitoring Reviews
Set a cadence for reviewing production system performance against documented benchmarks. Assign ownership within your governance function.
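A review of this kind can be partially automated: compare production metrics against the documented benchmarks and surface anything out of tolerance for governance review. The metric names and tolerance below are illustrative assumptions, not values the Act specifies.

```python
# Minimal sketch of a post-deployment benchmark check. Metric names and
# the tolerance are illustrative; your governance function sets both.
def review_against_benchmarks(production: dict[str, float],
                              benchmark: dict[str, float],
                              tolerance: float = 0.05) -> list[str]:
    """Return metrics whose production value degraded beyond tolerance."""
    return [m for m, target in benchmark.items()
            if production.get(m, 0.0) < target - tolerance]

benchmark = {"accuracy": 0.92, "recall_protected_group": 0.88}
production = {"accuracy": 0.93, "recall_protected_group": 0.80}
print(review_against_benchmarks(production, benchmark))
```

Automated checks like this do not replace the scheduled human review, but they give the review a concrete agenda: every flagged metric becomes an item requiring documented investigation or remediation.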
Building a Responsible AI Governance Function
A checklist creates a starting point. Sustained EU AI Act compliance requires an operational governance function.
Assign clear ownership. In most enterprises, AI governance sits across legal, engineering and compliance teams with unclear accountability. Designate a lead responsible for maintaining the model registry, coordinating risk assessments and tracking regulatory developments. Larger organizations are building dedicated AI governance teams.
Implement a governance maturity model to assess and improve your program over time. At the foundational level, organizations have ad hoc AI use with no formal documentation. A developing program has basic inventory and some risk classification in place. A defined program has documented processes, assigned ownership and regular review cycles. An optimized program integrates AI governance into product development from the start and uses continuous monitoring to surface issues proactively.
Most enterprises are currently at the foundational or developing stage. The 2027 deadline requires reaching at least the defined level across all high-risk systems.
Looking ahead, three trends will shape AI governance policy through 2027 and beyond. Regulators will increase enforcement activity as the 2027 deadline approaches, with early enforcement actions likely targeting high-profile industries like financial services and healthcare. The US will move closer to sector-specific binding requirements even without comprehensive federal legislation. AI governance will increasingly be treated as a procurement requirement, with enterprises demanding compliance documentation from AI vendors before deployment.
Organizations that build governance infrastructure now will find ongoing compliance significantly easier. Those that delay will face compressed timelines, higher remediation costs and greater regulatory risk.
12th Wonder works with enterprise teams navigating AI governance policy, responsible AI framework design and compliance readiness across regulated industries. If you are building or scaling an AI governance function, our team can help you move from checklist to operational program.