EU AI Act Under Scrutiny: Delays, Debates, and the Future of European AI Governance
11 Jun 2025
The European Union’s ambitious Artificial Intelligence Act (AI Act), a landmark piece of legislation aimed at establishing a comprehensive regulatory framework for AI, finds itself at a pivotal juncture in mid-2025. Since its initial proposal in April 2021, the Act has been a focal point of intense discussion, balancing the EU’s aspiration to set global AI governance standards with concerns from industry and international partners about innovation and practical implementation. As the staggered rollout of its provisions continues, recent reports in May and June 2025 suggest the European Commission might be considering a pause on the application of certain upcoming elements, signaling a period of reflection and potential recalibration.
The Journey So Far: A Staggered Path to Regulation
The AI Act’s journey began with the European Commission tabling its draft in April 2021, aiming to ensure AI systems are safe, transparent, and respect fundamental rights. The legislation received a mixed welcome. While many lauded its risk-based approach—categorizing AI systems based on their potential harm—others, including some EU Member States, voiced criticisms regarding regulatory overreach, burdensome compliance, and interpretive uncertainties.
After extensive negotiations, the AI Act officially entered into force in the summer of 2024, adopting a staggered approach to its application:
– February 2, 2025: Provisions targeting “prohibited AI practices” became effective. These include AI systems deemed to pose an unacceptable risk, such as those used for social scoring by public authorities or manipulative techniques that exploit vulnerabilities. Guidelines on these prohibited practices were released by the European Commission on February 4, 2025, just two days after the rules took effect.
– August 2, 2025 (Upcoming): Rules concerning general-purpose AI models are scheduled to come into force, following consultations held by the European Commission in May 2025.
– August 2, 2026 (Intended): The main body of provisions, including the crucial requirements for “high-risk AI systems,” is slated for application. This category is of particular interest to businesses as it covers a broad array of typical activities, such as AI used in recruitment, job allocation, credit checking, and critical infrastructure management.
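The Act's risk-based structure can be pictured as a simple tiering exercise. The sketch below is purely illustrative: the tier names follow the Act's broad categories, but the mapping of example use cases to tiers is a simplified assumption, not legal guidance.

```python
# Hypothetical sketch of the AI Act's risk-based tiering.
# Tier names follow the Act's broad categories; the example
# use-case mappings are illustrative assumptions, not legal advice.

RISK_TIERS = {
    "prohibited": {"social scoring by public authorities",
                   "manipulative techniques exploiting vulnerabilities"},
    "high": {"recruitment", "job allocation", "credit checking",
             "critical infrastructure management"},
    "limited": {"customer chatbot"},   # transparency obligations apply
    "minimal": {"spam filtering"},
}

def classify_use_case(use_case: str) -> str:
    """Return the (assumed) risk tier for a named use case, else 'unclassified'."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "unclassified"

print(classify_use_case("recruitment"))     # high
print(classify_use_case("spam filtering"))  # minimal
```

In practice, classification under the Act depends on detailed legal criteria and context, so any real assessment would need far richer inputs than a use-case label.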
Whispers of a Pause: May-June 2025 Developments
It is these upcoming provisions, particularly for high-risk systems, that are now reportedly under discussion for a potential postponement. In May 2025, reports emerged suggesting the European Commission was contemplating a delay in the application and enforcement of certain yet-to-take-effect provisions of the AI Act.
This consideration is reportedly fueled by a Polish-led initiative, set to be discussed at an upcoming meeting of EU ministers in the Council of the EU's Telecommunications configuration. The proposals are said to include:
– A pause in the AI Act’s entry into force until necessary technical standards are developed.
– An expansion of exemptions for small and medium-sized enterprises (SMEs) under the high-risk AI regime.
– The introduction of waivers for low-complexity AI systems that would otherwise require third-party assessments.
– The creation of a cross-regulatory forum to ensure consistency across the EU’s burgeoning suite of digital regulations.
It’s crucial to note that, as of June 2025, any postponement remains a subject of discussion, and no formal plans have been officially tabled. Businesses are currently advised to continue designing and implementing compliance programs based on the AI Act’s published timetable.
Why the Hesitation? Unpacking the Pressures
Several factors are likely contributing to these discussions around a potential pause:
- Industry Pressure: Since 2021, the European Commission has faced sustained pressure from industry stakeholders. Concerns have consistently centered on the AI Act potentially stifling innovation due to stringent compliance requirements, a lack of certainty in their application, and high barriers to entry, especially for smaller organizations. These concerns have persisted despite provisions in the regulation aimed at assisting SMEs, such as reduced fees and simplified compliance measures. Critics also argue that supplementary materials, like the draft General-Purpose AI Code of Practice, might extend the law beyond its original intent, unfairly burdening organizations.
- International Dynamics: The AI Act’s ambition to be the “international gold standard for AI regulation” is facing evolving international opinion. Changes in administrations within EU Member States and major trading partners have contributed to this shift. The US government, for example, has voiced disapproval of the EU’s approach, citing potential impediments to industry innovation. US Vice President JD Vance, at the Paris AI Summit on February 11, 2025, cautioned against excessive AI regulation “killing a transformative industry”. This reflects a preference for free-market innovation over heightened regulation, a stance also supported by a proposed AI regulatory moratorium for US state and local entities currently under discussion in the US Congress. The US Mission to the EU has also provided feedback on the draft Code of Practice, suggesting streamlining and deletions, highlighting potential trade implications if disparities in regulatory approaches are not addressed.
- Operative Delays: The practical rollout of the AI Act has encountered significant timing challenges. Key guidance documents and technical standards, crucial for businesses to prepare for compliance, have been delayed.
- The General-Purpose AI Code of Practice, initially intended for release on May 2, 2025, has been delayed, with a final deadline of August 2, 2025.
- Many harmonized standards under development by CEN-CENELEC, which enable organizations to demonstrate compliance, were originally due in August 2025 but have been pushed back well into 2026. This leaves little time for implementation before the next wave of rules for high-risk systems is due to come into force.
- Even critical guidance on interpretation, such as that for prohibited AI practices, was released just two days after those provisions took effect, giving organizations minimal time to adapt. These delays fuel arguments that the AI Act is not yet fully fit for purpose until all its components are finalized and shared.
The Bigger Picture: EU’s Simplification Drive
These potential adjustments to the AI Act’s timeline occur within a broader context of the European Commission’s efforts to simplify complex legislation and boost the EU’s competitiveness. Through “Omnibus packages,” the EU is aiming to streamline regulations across various fronts, including the green agenda (CSRD, CSDDD), agricultural legislation, product regulation, digitalization (including GDPR simplification for SMEs), and even defense. The EU AI Office has also recently gathered public feedback on implementation challenges, which will inform a potential upcoming AI Act simplification exercise.
What This Means for Businesses on tecprotech.es
For businesses in Spain and across the EU, particularly those developing or deploying AI solutions, this period of discussion brings both uncertainty and a potential window for further preparation.
– Continue Compliance Efforts: Despite talk of a pause, the safest course of action is to continue aligning with the AI Act’s requirements as currently published. The core principles of risk management, transparency, and data governance are unlikely to disappear.
– Stay Informed: The regulatory landscape is fluid. Businesses should closely monitor official communications from the European Commission and the European AI Office for definitive updates.
– Leverage International Best Practices: Regardless of the AI Act’s final timeline, aligning with established international best practices, such as the NIST AI Risk Management Framework or ISO standards for AI, can provide a solid foundation for responsible AI development and deployment. These frameworks recommend similar controls for managing AI risks, which can help organizations pivot more quickly as regulations cohere.
– Risk Management is Key: Even if some AI Act provisions are delayed, the inherent risks associated with AI technologies remain. Proactive risk management, ethical considerations, and robust governance should remain urgent priorities for any organization using AI.
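As a sketch of what lightweight, proactive risk management might look like in practice, the snippet below maintains a minimal AI risk register. The field names, scoring scale, and example entries are illustrative assumptions, not requirements of the AI Act or any framework mentioned above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRisk:
    """One entry in a lightweight AI risk register (illustrative fields)."""
    system: str
    description: str
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)
    mitigation: str = "TBD"
    reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring for triage.
        return self.likelihood * self.impact

register = [
    AIRisk("cv-screener", "Possible bias against protected groups", 4, 5,
           "Bias audit before each model release"),
    AIRisk("support-chatbot", "Incorrect answers shown to customers", 3, 2,
           "Human review for high-stakes replies"),
]

# Triage: address the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.system}: score={risk.score} -> {risk.mitigation}")
```

A register like this maps naturally onto the risk-management documentation that high-risk systems are expected to maintain, and it costs little to start now even if timelines shift.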
Looking Ahead
Whether a formal proposal to pause parts of the AI Act will materialize is currently unclear. However, the upcoming discussions within the Council of the EU may offer further indications. The EU is clearly engaged in a delicate balancing act: striving to foster trust and safety in AI through regulation, while also aiming to support innovation and maintain the bloc’s competitiveness in a rapidly evolving global technology race.
For businesses navigating this landscape, agility, a commitment to ethical AI, and a proactive approach to compliance and risk management will be essential for thriving in the digital decade.
To stay updated on AI regulation and its impact on businesses in Spain and the EU, follow tecprotech.es.