TL;DR
- Over 110 companies, including ASML and SAP, want the EU to delay AI Act enforcement due to unclear guidelines.
- Firms warn the lack of finalized technical standards could hurt innovation and global competitiveness.
- The AI Act may become a global benchmark, but its rollout risks repeating the GDPR’s early compliance pitfalls.
- Global fragmentation in AI rules forces companies to adopt costly, region-specific strategies.
A growing wave of concern is sweeping across Europe’s tech industry as over 110 companies, including major names like ASML Holding NV, SAP SE, and Mistral AI, have urged the European Union to postpone the enforcement of its flagship AI regulation.
The call comes in the form of a joint letter addressed to European Commission President Ursula von der Leyen, warning that the existing timeline for the AI Act could stifle innovation and put European firms at a global disadvantage.
The AI Act, agreed in late 2023 and formally adopted in 2024, aims to regulate artificial intelligence by introducing mandatory requirements around transparency, risk control, and data governance. While the legislation is intended to establish a global gold standard for responsible AI development, industry leaders argue that critical details remain unclear, and that the lack of finalized implementation guidelines is creating a climate of uncertainty.
Leading Firms Raise the Alarm Over Unclear Guidelines
Companies across sectors have flagged the absence of a finalized code of practice as a major obstacle. Airbus SE, Mercedes-Benz Group AG, and Deutsche Lufthansa joined the chorus of concerned voices, saying that while they support AI regulation in principle, they lack the technical clarity needed to comply with the Act’s high-stakes provisions.
At the heart of the issue are the rules governing general-purpose AI models and so-called "high-risk" systems. These classifications carry stringent requirements, and non-compliance can expose firms to fines of up to 7 percent of global annual turnover. With enforcement deadlines approaching and guidance still in flux, companies argue they are being forced to prepare for rules they cannot yet fully interpret.
A Familiar Compliance Challenge
The pushback mirrors early resistance to the EU’s General Data Protection Regulation (GDPR), when many firms were caught off-guard by the lack of practical compliance pathways.
Much like GDPR, the AI Act is positioned to shape global standards, with countries in Southeast Asia and beyond already taking cues from the EU’s framework.
However, the same ambitions that make the Act globally influential could also lead to unintended consequences. Critics warn that overly rigid rules may encourage AI companies to move their operations to regions with more flexible regulatory environments, undercutting the EU’s goals of technological sovereignty and innovation leadership.
Fragmented Global Rules Add to the Pressure
Complicating matters further is the fragmented state of AI governance around the world. While the EU pursues a centralized, binding approach, the United States relies largely on voluntary standards and sector-specific rules. Meanwhile, Asia presents a wide spectrum of regulatory attitudes, from China’s prescriptive laws to Singapore’s lighter-touch, non-binding frameworks.
This patchwork of standards is forcing global tech firms to juggle multiple compliance regimes simultaneously, driving up operational costs and creating inconsistencies in product development strategies. For European companies already navigating economic and geopolitical headwinds, the regulatory burden could erode competitiveness.
Industry Warns of Risk Without Technical Readiness
Beyond legal ambiguity, the industry is grappling with a broader problem: the lag in developing the concrete technical standards needed to operationalize the law. A promised code of practice was due in May but remains delayed amid disagreements over its specifics. Without these standards, even well-intentioned companies are struggling to ready their systems for compliance.
Experts note that the law demands assurances around robustness, accuracy, and resistance to manipulation, but stops short of detailing the technical solutions required. This gap leaves businesses operating in a legal grey zone, unsure of how to engineer their models to meet compliance thresholds.