The European Commission has proposed delaying the full enforcement of its landmark Artificial Intelligence (AI) Act until December 2027, effectively pushing back critical regulations on high-risk AI systems by over a year. This move, unveiled as part of the “Digital Omnibus” – a broader effort to streamline EU digital laws – has ignited controversy, pitting industry lobbyists against consumer advocates and raising concerns about the pace of AI oversight in Europe.
Why the Delay?
The Commission attributes the postponement to slow implementation by member states and the need for businesses to adapt to the complex new rules. Specifically, provisions targeting “high-risk” AI technologies – those used in critical decision-making processes like loan applications, hiring, and educational assessments – will now not be fully enforced until late 2027.
This delay is significant because it means AI systems will continue operating under fewer restrictions for an extended period. In practice, companies can keep using data and automated decision-making practices that the Act would otherwise restrict when making consequential decisions about individuals’ access to financial services, healthcare, and employment opportunities.
Industry and Advocacy Responses
The tech industry has largely welcomed the delay, with groups like the Computer & Communications Industry Association (CCIA) – representing Amazon, Apple, Google, and Uber – calling for even more flexibility. They argue the current regulations are overly burdensome and hinder innovation. Critics, however, contend this is a blatant case of deregulation benefiting big tech at the expense of consumer protection.
Finance Watch’s Peter Norwood argues this is a “deregulate to accelerate” strategy that will harm consumers. He warns that individuals could face biased AI-driven denials for loans or discriminatory insurance premiums without transparency or recourse.
Consumer groups such as BEUC, the European Consumer Organisation, also criticize the move, arguing that instead of simplifying rules, the Commission is prioritizing industry interests over citizens’ rights.
Implementation Challenges and Political Hurdles
The delay is partly rooted in logistical realities: many EU member states missed the 2025 deadline to establish the national authorities required to enforce the AI Act. Without these structures, independent compliance assessors cannot be certified, and the system cannot function effectively.
The path forward is not guaranteed. Implementing the Omnibus will require amending the General Data Protection Regulation (GDPR), a move expected to face resistance from Members of the European Parliament (MEPs) across the political spectrum. Some legislators argue that laws so recently debated and adopted should not be reopened, especially before they have been fully implemented.
The Bigger Picture
The EU’s AI Act was intended to set a global standard for responsible AI development. However, the delay raises questions about Europe’s commitment to enforcing those standards. The longer high-risk AI systems operate under looser regulations, the greater the potential for harm – whether through biased algorithms, privacy violations, or unfair economic outcomes.
The Commission’s decision underscores a broader tension between innovation and regulation in the AI era. Balancing these competing priorities will be a defining challenge for policymakers in the years to come.
The proposed delay is not merely a technical adjustment; it’s a strategic pause that reshapes the landscape of AI governance in Europe. The next few months will determine whether this pause leads to more effective oversight or further erosion of consumer protections.
