Still only halfway through its implementation schedule, the European Artificial Intelligence Act's biggest challenge is to remain relevant in the face of a rapidly evolving technology.

The EU AI Act, the European Union's artificial intelligence regulation, had worldwide impact when it entered into force on August 1, 2024. But we're only halfway through its rollout: another wave of its provisions takes effect this weekend, with more to come next year.

The act imposes prohibitions or conditions on AI systems depending on whether their risk is considered unacceptable, high, limited, or minimal, with a phased timetable for the rules. The prohibitions on unacceptable-risk AI systems have been in effect since February 2, 2025. On August 2, 2025, measures relating to governance standards, general-purpose AI (GPAI) models, and the sanctions regime, among others, will be activated. Certain exemptions mean that full implementation of the law won't happen until 2030.

A quiet beginning

While some of the act's measures — including the ban on unacceptably risky AI systems, the opening of the European AI Office, and the publication of guidelines for GPAI models — are already technically in effect, they have been largely invisible, according to Víctor Rodríguez, senior lecturer in the Department of Artificial Intelligence at the Polytechnic University of Madrid (UPM). "Since sanctions don't start until August 2, 2025, we haven't seen examples with 'media impact.' We will see them soon," he said.

Rodríguez alluded to other factors affecting the regulation, such as the arrival of a new Trump administration in the White House, which may have changed how the EU's regulation is perceived. "The European Commission wanted to repeat the success of the General Data Protection Regulation (GDPR), which served as a beacon for a world that largely tried to replicate the regulation, but this so-called 'Brussels effect' may not happen this time," he said, citing tech giants' division over the EU's General Purpose AI Code of Practice.

Roger Segarra, a partner in the IT and intellectual property department at Osborne Clarke, is already seeing the impact of the act. "Some prohibited practices, such as the use of real-time remote biometric identification systems by public authorities for surveillance purposes, have had a deterrent effect since their inception, even before their effective implementation," he said.

He highlighted a disparity between companies in how they have assimilated the regulation: While large companies "have voluntarily taken early action to adjust," the situation is different among smaller companies. "For SMBs, a certain climate of tension has been generated due to the bureaucratic and economic burden involved in complying with the regulations and — for the moment — the scarce practical guidelines that exist," he said.

"Likewise, the level of implementation is uneven among the different member states," he said, highlighting Spain's role as the first EU country to create a national artificial intelligence supervisory authority, AESIA.

For others, though, the EU AI Act isn't moving fast enough. "The first year has shown that AI is advancing faster than the legislative capacity to regulate it," said Arnau Roca, managing partner at Overstand Intelligence, a consultancy specializing in AI.
Roca said the regulation is "a necessary and positive first step towards regulating the use of artificial intelligence," but saw challenges in its deployment due to the rapid evolution of the technology: "On a daily basis we see at Overstand Intelligence how some project requests sit right on the boundary between ethical and abusive."

In this regard, he spoke of the potential risks to humanity posed by tools such as real-time image recognition: "What makes AI law obsolete is not only the technology itself, but also the human capacity to quickly imagine applications that are not yet contemplated in the current regulation."

Rodríguez identified other "disruptive technological novelties" such as agents, real-time deepfakes, and multimodal models that combine text, image, and audio. "And yet, what really produces vertigo is to look towards those applications of AI that fall outside the law or are simply not affected by it, such as the actual deployment of military applications on battlefields or the use of these technologies in platforms for global mass surveillance."

Segarra highlighted the "establishment of absolute prohibitions respectful of fundamental rights and vulnerable people" as especially relevant, anticipating the proliferation of AI systems "invasive of people's rights and freedoms," such as subliminal manipulative methods or social scoring.

Room for improvement

The accelerated evolution of intelligent technologies means that the AI Act must be designed for continuous adaptation, as all three experts acknowledged.

Rodríguez foresaw adjustments "in a couple of years," which he said will be easier thanks to the very design of the regulation, which defines general principles and obligations but leaves the technical details to harmonized international standards. "Modifying an international standard, even with all the bureaucratic apparatus, is more agile than reopening the entire legislative process; and the discussions take place in a more technical than political environment." This will surely aid its efficiency, "but that which falls outside its scope will remain a challenge."

The law needs dynamic mechanisms to adapt to unforeseen scenarios, said Roca. "An adaptable and agile regulation that allows constant updates without losing legal certainty is key."

But, said Segarra, political and social pressure could lead to the AI Act being formally revised before the end of the five-year period defined in the text itself. For him, the a posteriori, continuous review of models is one of the "hottest aspects" of the regulation. He spoke of "the need to include a higher level of control once AI systems have been launched on the market, including the performance of fundamental rights impact assessments on a systematic basis."

Some aspects of the law can be improved, said Rodríguez, including what it has to say about open source. "The obligations imposed by the regulation affect AI developers and users, including in open source projects. The definition of 'provider' is confusing and the impact on projects such as Llama is uncertain. How can we regulate modifications of open source models by third parties?" he asked. He also pointed to the administrative and compliance burden for small players in the market, something Segarra agreed with.
"The imposition of regulatory burdens on SMBs requires the design of guidelines that allow the adoption of simplified procedures and the subsidized use of regulatory compliance tools," Segarra said.

There are exemptions, Rodríguez acknowledged, "but there is still no clarity on when they apply in practice."

The UPM professor also noted criticism from some quarters of "the excessive accumulation of power by Brussels," with the requirement to register high-risk systems in a centralized database a particular sticking point. "Companies fear the leaking of trade secrets, some member states believe that registration should be at state level and not European, SMBs fear bureaucracy, others excessive accumulation of power," he said.

Despite having been in force for a year, the AI Act faces significant challenges if it is to keep up with the times.