
Published on: May 26, 2025 | Updated: May 26, 2025

The EU AI Act Explained

Artificial intelligence (AI) is reshaping industries, public services, and everyday consumer experiences at an unprecedented pace. As these technologies become more integrated into critical systems and decision-making processes, the need for clear and consistent regulation has intensified. In response, the European Union has adopted the Artificial Intelligence Act (EU AI Act)—the first comprehensive legal framework of its kind designed to govern the development and use of AI. 

In this article, we’ll discuss what the EU AI Act is, why it matters, its key requirements, and the impact it will have on businesses.

What is the EU AI Act and Why Does It Matter?

The EU AI Act is the European Union’s landmark legislation created to regulate the safe and ethical use of artificial intelligence. As AI systems increasingly power everything from healthcare tools and financial services to recruitment software and government decision-making, the Act introduces a risk-based framework to ensure these technologies are trustworthy, transparent, and aligned with fundamental rights. 

Approved in 2024 and set for full enforcement by August 2, 2026, the EU AI Act will have far-reaching implications for organizations operating within or serving customers in the EU. At its core, the AI Act establishes rules for how AI systems can be developed, deployed, and monitored regardless of location. Businesses will be required to assess the risks of their AI systems, meet stringent transparency and accountability standards, and ensure ongoing compliance with regulatory obligations.

What is the AI Office and How Does It Fit into the EU AI Act?

Central to the effective enforcement of the EU AI Act is the creation of the AI Office—a new regulatory body embedded within the European Commission. The AI Office is tasked with supervising the harmonized application of the Act across all EU member states, acting as the main hub for coordinating AI governance at the union level.

Key responsibilities of the AI Office include:

  • Overseeing the implementation and consistent enforcement of the EU AI Act.

  • Providing technical expertise and guidance to both national authorities and companies striving for compliance.

  • Issuing clarifications, best practices, and FAQs as new questions arise in this rapidly evolving field.

  • Supporting the identification and management of risks associated with high-risk AI systems.

  • Coordinating with the newly established AI Board—an advisory group comprising representatives from all EU countries—to help align interpretation and enforcement across jurisdictions.

In practice, the AI Office serves as the “nerve center” for AI regulation throughout the EU, ensuring that the rules set forth by the Act aren’t left to patchwork enforcement. By working closely with national supervisory authorities and the AI Board, the AI Office is designed to bridge the gap between technical innovation and regulatory oversight—promoting responsible AI adoption while safeguarding fundamental rights.

This structure enables businesses and public sector organizations alike to navigate the complex requirements of the AI Act with greater clarity and confidence, while keeping pace with advances in AI technology.

Who Needs to Pay Attention?

The EU AI Act applies broadly to any organization that develops, deploys, or sells AI systems within the European Union—or to EU citizens—regardless of where the company is headquartered. Its extraterritorial scope means global businesses using AI technologies must ensure compliance if they operate in or serve the EU market. 

Because the regulation impacts how AI is designed, trained, implemented, and monitored, compliance is not the responsibility of a single department; it requires coordinated action across the organization.

How Are AI Safety Standards Set Under the EU AI Act?

To turn regulation into real-world practice, the EU AI Act relies on the establishment of technical standards for AI safety. These standards serve as detailed blueprints that organizations must follow to demonstrate compliance with the Act’s requirements—think of them as the “user manuals” for responsible AI development and deployment.

Standard setting in this context is a collaborative, multi-stage process:

  • Expert Involvement: Industry leaders, academic experts, policymakers, and civil society groups all contribute insights to make sure the standards are both effective and practical.

  • Reference to International Frameworks: Bodies such as the European Committee for Standardization (CEN), the European Telecommunications Standards Institute (ETSI), and even global organizations like ISO and IEC play a major role.

  • Drafting and Public Consultation: Proposed standards are often released in draft form, with open calls for feedback from stakeholders before being finalized.

As for timing, the development of these standards is well underway. With the Act set to be fully enforceable by August 2026, most core standards are expected to be published in the lead-up to this date. However, because technology evolves so quickly, standard setting will continue to be an ongoing process, adapting in response to new risks and advancements in AI.

Compliance Requirements Under the EU AI Act

The EU AI Act categorizes AI systems into four distinct risk levels: unacceptable, high, limited, and minimal. Each category comes with its own set of legal and regulatory obligations, determining the degree of scrutiny and compliance required for the system’s development and use. 

While most AI systems will fall into the minimal or limited risk categories, the European Union estimates that approximately 5–15% of existing AI applications will be classified as high-risk. These systems will face the most stringent compliance requirements under the Act, representing a relatively small share of the market but carrying significant regulatory burdens for the affected organizations.


Unacceptable Risk

Definition: AI systems posing a clear threat to the safety, livelihoods, or rights of individuals.

Examples:

  • Social scoring by governments

  • Cognitive behavioral manipulation

  • Real-time biometric surveillance in public spaces (limited exceptions)

Regulatory Status: Prohibited (banned outright from use in the EU).

Key Compliance Requirements:

  • Cannot be developed, marketed, or deployed in the EU

  • Limited exceptions may apply for law enforcement under strict safeguards

High Risk

Definition: AI systems used in critical infrastructure or decision-making processes that significantly impact individuals’ lives or rights.

Examples:

  • Biometric ID

  • AI used in hiring, education, and credit scoring

  • Medical devices

  • AI in law enforcement or border control systems

Regulatory Status: Tightly regulated (permitted only if strict requirements have been met).

Key Compliance Requirements:

  • Conformity assessments and CE marking

  • Risk management and mitigation procedures

  • Robust data governance practices

  • Detailed technical documentation

  • Logging and traceability

  • Human oversight mechanisms

  • Transparency to users and authorities

Limited Risk

Definition: AI systems that interact with users or could influence user behavior, but do not directly impact rights or safety.

Examples:

  • AI chatbots

  • Deepfakes (non-malicious)

  • Recommendation engines

Regulatory Status: Transparency obligations (allowed with specific communication requirements).

Key Compliance Requirements:

  • Systems must inform users they are interacting with AI

  • For deepfakes, clear disclosure that content is artificially generated or manipulated

Minimal Risk

Definition: AI systems with low or no impact on users’ rights or safety.

Examples:

  • Spam filters

  • AI in video games

  • Predictive text input tools

Regulatory Status: Unregulated (no mandatory requirements under the Act).

Key Compliance Requirements:

  • No legal obligations

  • Voluntary codes of conduct encouraged

  • Developers are encouraged to follow ethical guidelines for responsible AI development
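
To make the tiered structure concrete, here is a minimal, illustrative Python sketch of how a compliance team might encode the Act’s risk tiers and the headline obligations attached to each. It is only a sketch based on the overview above; the enum, dictionary, and helper names are hypothetical and not part of the Act or any official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # tightly regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory requirements

# Illustrative mapping of each tier to the headline obligations summarized above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Banned from development, marketing, and deployment in the EU"],
    RiskTier.HIGH: [
        "Conformity assessment and CE marking",
        "Risk management, data governance, and technical documentation",
        "Logging, human oversight, and transparency to users and authorities",
    ],
    RiskTier.LIMITED: ["Inform users they are interacting with AI; disclose synthetic content"],
    RiskTier.MINIMAL: ["No legal obligations; voluntary codes of conduct encouraged"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print(f"- {item}")
```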


What Makes an AI System "High-Risk"?

According to the EU AI Act, an AI system is considered high-risk if it: 

  • Is used in sensitive or safety-critical sectors such as healthcare, education, employment, law enforcement, or infrastructure. 

  • Significantly influences individuals’ legal or material standing. 

  • Is listed in Annex III of the Act (which will be updated periodically based on technological and societal developments).

What are the Requirements for a “High-Risk” AI System?

Organizations developing or using high-risk AI systems must meet the following requirements to comply with the EU AI Act: 

  1. Risk Management System (Article 9): Implement a continuous, documented process to identify, evaluate, and mitigate risks throughout the AI system’s lifecycle. 
     

  2. Data and Data Governance (Article 10): Ensure training, validation, and testing datasets are relevant, representative, free of errors, and statistically appropriate to reduce bias and ensure fairness. 
     

  3. Technical Documentation (Article 11): Maintain detailed technical documentation that demonstrates compliance with the Act and enables authorities to assess the system’s conformity. 
     

  4. Record-Keeping and Logging (Article 12): Design systems to automatically record events (logging) to ensure auditability, traceability, and incident investigation (see the sketch after this list). 
     

  5. Transparency and Information to Users (Article 13): Provide clear instructions and documentation to users about how the AI system functions, its limitations, and how to use it safely and effectively. 
     

  6. Human Oversight (Article 14): Integrate mechanisms that ensure meaningful human involvement. The AI system must not override or mislead human decision-makers. 
     

  7. Accuracy, Robustness, and Cybersecurity (Article 15): Design systems to deliver accurate, reliable, and secure performance. They must withstand manipulation and continue functioning under normal and foreseeable conditions. 
     

  8. CE Marking and EU Declaration of Conformity (Articles 47 and 48): Apply the CE marking to indicate conformity and submit a formal declaration that the AI system meets EU regulatory standards. 
     

  9. Conformity Assessment (Article 43): Conduct internal checks or third-party audits (depending on the system type) to certify that the AI system meets all legal requirements before it is placed on the market or put into service. 
     

  10. Post-Market Monitoring (Article 72): Implement processes to monitor performance and risks after deployment and ensure ongoing compliance throughout the system's operational life. 
     

  11. Incident Reporting (Article 73): Establish protocols for notifying national authorities of serious incidents or system malfunctions that pose risks to health, safety, or fundamental rights.
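
To ground the record-keeping obligation (item 4 above), the sketch below shows one way an engineering team might capture automatic, append-only event logs for a high-risk system. It is a minimal illustration assuming a simple JSON Lines log file; the field names and event types are hypothetical, not prescribed by the Act.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class AIEvent:
    """A single logged event supporting auditability and traceability."""
    system_id: str      # internal identifier of the AI system
    event_type: str     # e.g. "prediction", "human_override", "model_update"
    model_version: str  # version of the model involved in the event
    detail: str         # short human-readable description
    timestamp: str = "" # filled in automatically when the event is written

def log_event(event: AIEvent, log_file: Path) -> None:
    """Append the event as one JSON line, stamped with a UTC timestamp."""
    event.timestamp = datetime.now(timezone.utc).isoformat()
    with log_file.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")

if __name__ == "__main__":
    log_event(
        AIEvent(
            system_id="credit-scoring-v2",
            event_type="human_override",
            model_version="2.3.1",
            detail="Loan officer overrode an automated rejection after manual review",
        ),
        Path("ai_audit_log.jsonl"),
    )
```

An append-only, timestamped log of this kind also supports post-market monitoring (item 10) and incident reporting (item 11), because the evidence trail already exists when authorities ask for it.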

Who Qualifies as Providers of General-Purpose AI Models?

In April 2025, the AI Office released preliminary guidelines to clarify exactly who falls under the category of “providers of general-purpose AI (GPAI) models.” According to these early recommendations, any organization—whether a tech startup in Berlin or a multinational like Google, Microsoft, or OpenAI—that develops, trains, or makes available broad, foundational AI models used for a wide array of applications (like large language models or foundation models) may be considered a provider. This includes companies building models intended for multiple downstream uses, even if the end applications are created or customized by others.

The guidelines outline several factors for qualification, such as:

  • Responsibility for designing or training an AI model with general capabilities, rather than one narrowly tailored for a specific purpose.

  • Involvement in the commercialization, open-sourcing, or deployment of these models to other businesses, developers, or the public.

  • The capacity to influence or update the original model’s design, training data, or performance.

This means that not only the big names in AI, but also research institutions and open-source communities sharing their general-purpose models, may need to prepare for compliance under the Act. The AI Office’s final guidance is expected to further clarify edge cases like collaborative research or open-source development, but the preliminary direction is clear: if you build or provide AI models with wide-ranging uses, the rules likely apply.

What are Other Compliance Requirements for the EU AI Act?

To meet the obligations set forth under the EU AI Act, organizations must implement a range of technical and procedural controls. These requirements are designed to ensure AI systems are safe, transparent, and aligned with fundamental rights throughout their lifecycle. 

  • Risk management processes that cover the entire AI lifecycle, from initial development through software updates and re-deployment.

  • Data governance practices that verify training data is accurate, relevant, and free of bias.

  • Transparency measures that notify users whenever they are interacting with an AI system.

  • Human oversight mechanisms that allow meaningful intervention and manual override of automated decisions.

  • Comprehensive logging and traceability, enabling audits and accountability.

  • CE marking as a visible declaration that the system has passed conformity assessment and meets EU requirements.

Achieving these standards will require a cross-functional effort: legal, compliance, product, and technical teams must align culturally and operationally to integrate these safeguards into their workflows.

The General-Purpose AI Code of Practice: A Key Pillar of Enforcement

Beyond risk classifications, the EU AI Act also addresses a unique challenge posed by so-called "general-purpose" AI systems: versatile models from major tech players, such as OpenAI’s GPT-4, Google’s Gemini, and Meta’s Llama, that can be adapted for a wide range of downstream uses. Because these foundational models power everything from chatbots to content creators and scientific discovery tools, regulating their use requires a coordinated, practical approach.

Enter the Code of Practice for general-purpose AI. This Code acts as a guiding framework for developers and deployers of advanced AI models. Rather than prescribing one-size-fits-all technical specifications, it establishes best practices for transparency, data governance, risk assessment, and user protection, effectively setting the industry’s baseline for responsible development and deployment.

Crucially, adherence to the Code of Practice is a core part of how the EU will enforce the AI Act among providers of general-purpose models. It translates the Act’s overarching goals into actionable day-to-day standards, helping organizations steer clear of compliance pitfalls before the ink is dry on more granular legislation. By following the Code, major players can proactively demonstrate their commitment to safety and ethics, all while regulators gain a clear benchmark for evaluating AI systems across diverse applications.

In short, the Code of Practice is not just a box-ticking exercise. It’s the linchpin that connects the high-level principles of the EU AI Act with the realities of modern AI development, setting the standard for how general-purpose AI models should be built, tested, and brought to market.

How Can Businesses Prepare for the EU AI Act?

Preparing for the EU AI Act is a strategic, multifaceted journey rather than a one-off project. Below are the steps organizations can take to get ready for the requirements:

Step 1: Audit Your AI Ecosystem 

Begin by identifying all AI and automated systems in use across your organization, both internal (e.g., employee monitoring, decision-making tools) and external (e.g., customer-facing systems). This inventory forms the baseline for evaluating compliance gaps and areas for improvement. 

Step 2: Classify Risk Levels 

Apply the EU AI Act’s framework to categorize each system as unacceptable (prohibited), high, limited, or minimal risk. Risk classification guides your compliance priorities. 
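
As a lightweight starting point for Steps 1 and 2, a team might keep a machine-readable inventory of its AI systems and attach a provisional risk tier to each entry. The sketch below is purely illustrative: the record fields and the keyword-based tiering rule are assumptions for demonstration only, not a substitute for a legal assessment against the Act and Annex III.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the organization's AI inventory (Step 1)."""
    name: str
    owner: str          # team accountable for the system
    purpose: str        # short description of what the system does
    user_facing: bool   # does it interact directly with people?
    domain: str         # e.g. "hiring", "marketing", "spam filtering"

# Rough, illustrative keyword rule for a *provisional* tier (Step 2).
# A real classification must follow the Act's criteria and legal review.
HIGH_RISK_DOMAINS = {"hiring", "education", "credit scoring", "medical", "law enforcement"}

def provisional_tier(record: AISystemRecord) -> str:
    if record.domain in HIGH_RISK_DOMAINS:
        return "high"
    if record.user_facing:
        return "limited"  # at minimum, transparency obligations are likely to apply
    return "minimal"

inventory = [
    AISystemRecord("ResumeRanker", "HR Tech", "Scores job applications", True, "hiring"),
    AISystemRecord("MailGuard", "IT", "Filters spam", False, "spam filtering"),
]

for rec in inventory:
    print(f"{rec.name}: provisional tier = {provisional_tier(rec)}")
```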

Step 3: Identify Compliance Gaps 

Review data practices, model training, and user interactions. Document non-compliant areas for policy or technical updates in alignment with the EU AI Act.  

Step 4: Strengthen Internal Governance 

Assign clear responsibilities across legal, IT, data science, and other teams that work directly with AI functions. This shared responsibility helps foster collaboration across the organization. 
 
Key stakeholders who must be engaged include: 

  • Compliance, Risk, and GRC Teams: Must interpret legal requirements and oversee reporting, documentation, and internal audits. 

  • AI Developers and Product Managers: Responsible for integrating compliance requirements into AI system architecture, functionality, and lifecycle. 

  • Legal and Policy Teams: Need to stay up to date on evolving EU guidelines and ensure the organization’s AI systems meet jurisdictional requirements. 

  • Chief Technology Officers (CTOs): Must oversee system-level compliance, including secure data integration and transparency protocols. 

  • Executive Leadership: Ultimately accountable for funding compliance initiatives and establishing organization-wide responsibility for risk and ethics. 

As the enforcement timeline approaches, regulators are expected to increase scrutiny, making early preparation and cross-functional collaboration essential. 

Step 5: Leverage Technology 

Using the right technology can make AI compliance much easier. Tools that automate documentation, audits, and updates help reduce manual work and avoid errors. GRC tools, like StandardFusion, are especially valuable as they bring everything together in one place and help teams stay on top of AI-related risks and requirements. 

Step 6: Educate and Train Teams 

Educate developers, legal teams, and leadership on AI obligations. Regular, ongoing training builds a compliance-first culture and reduces future risk.

Overview of National Implementation Plans

As the EU AI Act moves from legislation to reality, each EU member state is preparing its own roadmap to bring these new rules to life on a national level. Implementation isn’t a one-size-fits-all affair. Governments are actively appointing designated authorities to oversee AI regulation, tailoring action plans that fit their country’s regulatory landscape, and collaborating with established organizations such as data protection agencies and standards bodies like ISO and IEEE.

What’s emerging is a coordinated patchwork:

  • Designated National Authorities: Member states are in the process of assigning or creating regulatory bodies responsible for enforcing the Act. In some cases, these will be existing digital or data protection authorities; in others, entirely new offices are being established.

  • Implementation Timelines: Countries are setting out step-by-step plans to ensure readiness for the August 2026 deadline, from drafting national guidance to launching consultation periods with industry stakeholders.

  • Collaboration and Guidance: Public-private partnerships are a key feature, with governments engaging with local businesses, research institutions, and international groups to align national strategies and harmonize approaches.

The result will be a network of national bodies sharing best practices and ensuring a consistent, EU-wide application of the Act, while leaving room for tailored responses to the unique challenges and capacities of each country.

Key Dates and Next Steps for the AI Act

With the EU AI Act officially adopted, businesses and organizations are now on the clock to prepare for compliance. Here’s a snapshot of the timeline and what to expect as the regulatory framework rolls out:

  • Entry into Force: The AI Act formally entered into force on August 1, 2024, starting the countdown toward full application.

  • Staged Application Periods: Not all requirements kick in at once. Certain provisions, such as the prohibition of specific “unacceptable risk” AI practices, begin to apply just months after the entry into force, while most other obligations follow a two-year implementation window, culminating in full enforcement by August 2, 2026.

  • Transitional Measures: Organizations are encouraged to use this period to assess their AI systems, adapt development pipelines, and implement robust risk management processes.

Expect further clarification in the coming months. The European Commission is set to release detailed guidance, technical standards, and secondary legislation (sometimes called “implementing acts”) to clarify expectations and streamline compliance. Industry groups like the Confederation of European Business (BusinessEurope) and tech consultancies such as Accenture and Deloitte are already offering roadmaps to help organizations align internal practices with the Act’s requirements.

In summary, now is the time for businesses to audit their use of AI, set up cross-functional teams (involving legal, compliance, tech, and product stakeholders), and stay tuned for official guidance that will shape day-to-day compliance.

Final Thoughts

The EU AI Act marks a major shift in how technology is governed, placing equal weight on innovation, safety, ethics, and transparency. It offers businesses a structured path to develop trustworthy and responsible AI systems without stifling progress. Forward-thinking organizations can get ahead by auditing their current AI systems, assessing potential risks, and building compliance processes that align with the Act.