As soon as the EU published its Code of Conduct on the use of AI as a supplement to the EU AI Act, criticism rained down from all sides, with the enforcement deadline just two weeks away.

Credit: Sidney van den Boogaard / Shutterstock.com

With the GPAI Code of Practice, the EU has published its first code of conduct for regulating general-purpose AI. It is intended to simplify compliance with the EU AI Act. The guidelines will enter into force on Aug. 2, 2025, and the EU intends to enforce them in practice starting in 2026. Those guidelines, however, are not without controversy and have been criticized by lobby groups, CEOs and CIOs, and NGOs.

The Code of Practice

The Code of Practice consists of three chapters: Transparency, Copyright, and Safety and Security.

The Transparency chapter provides a user-friendly template for documentation. It is intended to enable providers to easily document the information required to comply with the AI Act's obligation on model providers to ensure sufficient transparency (Article 53 of the AI Act).

The Copyright chapter offers providers practical solutions to comply with the AI Act's obligation to develop a strategy to comply with EU copyright law (Article 53 of the AI Act).

The Safety and Security chapter outlines concrete, state-of-the-art practices for addressing systemic risks, i.e., risks posed by the most advanced AI models. Providers can rely on this chapter to meet the AI Act's obligations for providers of general-purpose AI models with systemic risks (Article 55 of the AI Act). This chapter applies only to providers of general-purpose AI (GPAI) models with systemic risk.

Criticism from Bitkom

German digital association Bitkom is still relatively diplomatic in its statement. The association sees the code as an opportunity to create legal certainty for the development of AI in Europe. Furthermore, it has been simplified compared to initial drafts and is more closely aligned with the legal text, making it easier for companies to apply.

"The Code of Practice must not become a brake on Europe's AI position," warns Susanne Dehmel, member of Bitkom's management board. However, for the AI Act to be truly implemented in practice, Dehmel adds, "very comprehensive but vaguely worded audit requirements must be improved and the bureaucratic burden significantly reduced." Bitkom is also critical of the tightened requirement for open risk identification for very powerful AI models.

What EU CEOs say

More than 45 top managers also offered a clear message in an open letter. They warn that the EU is losing itself in the complexity of regulating artificial intelligence, and is thus risking its own competitiveness. The regulations are unclear in some areas and contradictory in others. The managers are calling for the implementation of the EU AI Act to be postponed by two years.

The letter was initiated by the lobby group EU AI Champions Initiative, which represents around 110 EU companies. Signatories include top executives at Mercedes-Benz, Lufthansa, Philips, Celonis, Airbus, AXA, and the French BNP Paribas, to name just a few.

SAP and Siemens call for new AI Act

Siemens CEO Roland Busch and SAP CEO Christian Klein were absent from that letter, feeling the criticism didn't go far enough. In an open letter of their own, they called for a fundamental revision of the EU AI Act, seeking a new framework that promotes innovation rather than hinders it.
For Busch, the AI Act in its current form is "toxic to the development of digital business models."

An NGO perspective

The Future Society, an NGO that sees itself as a representative of civil society, also has criticism for the new guidelines. The NGO is particularly concerned that US tech providers managed to weaken and water down key points in a closed session.

Nick Moës, executive director of The Future Society, says: "This weakened code puts European citizens and businesses at a disadvantage and misses opportunities to strengthen security and accountability worldwide. It also undermines all other stakeholders whose commitment and efforts for the common good have remained overshadowed by the influence of US Big Tech."

Four points of criticism

The NGO is particularly critical of the following four points:

1. The AI Office receives important information only after the product has been launched on the market. Providers only share the model report with its risk assessment after deployment, following a "publish first, then question" approach. This allows potentially dangerous models to reach European users unchecked. In the event of violations, the AI Office must initiate a recall, which can fuel unfounded criticism of innovation.

2. No effective whistleblower protection. Information from within is crucial in capital- and market-driven industries. In a world where AI companies know a lot about users, but users know little about them, internal whistleblowing is essential. The AI Office must be a safe haven and offer the same standards of protection as those required by the EU Whistleblower Directive.

3. No mandatory plans for emergency scenarios. Such protocols are standard in other high-risk areas, and damage can spread extremely quickly with GPAI. Providers must therefore be required to plan emergency response and damage mitigation strategies well in advance.

4. Providers have extensive decision-making power in risk management. Through lobbying, outcome-based rules were introduced: providers are now allowed to identify risks themselves, define thresholds, and conduct continuous evaluation, without having proven that they deserve this trust.