AI is transforming the enterprise — and the data center is no exception. As companies scale their ambitions for AI, they’re discovering that traditional data center models can’t keep up. Power and cooling systems are under strain, and legacy networks are creating data bottlenecks. Just as crucial, leaders must rethink staffing, governance, and culture to match AI’s relentless pace. We spoke to a number of practitioners and IT leaders to understand how data centers need to change to support the AI applications of the future. We came away with seven insights on how you can prepare your data centers — and the organizations that support them — for the AI era.

1. AI workloads push power and cooling systems to the limit

AI data center infrastructure is hungry for water and power — but for reasons specific to how these systems are built and used. “AI workloads consume significant amounts of energy, due not only to the sheer computational power required, but also because the underlying hardware, especially GPUs, is extremely expensive,” says Horn, CEO and founder of Scandinavian Data Centers. “This creates a strong incentive to keep the systems running as much as possible, which in turn drives the high electricity demand.”

That demand plays out differently than in traditional data center environments. The CEO of NLM Photonics, a veteran of AI deployments at Microsoft and Meta, says AI reshapes compute and network dynamics by distributing workloads across more systems. “In AI there are small functions or processes spread across a very large number of GPUs or TPUs,” he says.
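Horn’s point about utilization-driven electricity demand can be made concrete with a back-of-the-envelope estimate. The sketch below is illustrative only: the accelerator count, per-GPU wattage, and overhead factor are assumptions for the sake of the arithmetic, not vendor specifications.

```python
# Back-of-the-envelope power estimate for a GPU training rack.
# All figures are illustrative assumptions, not vendor specifications.

def rack_power_kw(gpus_per_rack: int,
                  watts_per_gpu: float,
                  overhead_factor: float) -> float:
    """Estimate total rack draw: GPU load plus host CPU, NIC, and cooling overhead."""
    return gpus_per_rack * watts_per_gpu * overhead_factor / 1000.0

# Assumed: 32 accelerators per rack at ~700 W each, with a 1.5x factor
# covering host CPUs, networking, and in-rack cooling.
draw = rack_power_kw(gpus_per_rack=32, watts_per_gpu=700, overhead_factor=1.5)
print(f"Estimated rack draw: {draw:.1f} kW")  # roughly 33.6 kW under these assumptions
```

Even with conservative assumptions, a single AI rack can draw several times what a legacy UPS and cooling plant were sized for — which is exactly the re-spec problem described below.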
This shift introduces a dual-network design: a scale-out network, like those in conventional data centers, and a scale-up network, which connects dense clusters of accelerators via “very high bandwidth pipes.” The combination of elevated compute throughput and massive data movement ramps up power usage and networking demands simultaneously.

That’s a challenge even at smaller scales. Friend, a director at Classroom365, sees these strains firsthand in the UK education sector. “At one London academy trust, we had to re-spec an entire server room because the old UPS couldn’t support the draw from multiple GPU-heavy inference workloads running in parallel with MIS and CCTV processing,” he says. Most schools, he adds, “haven’t factored in the electrical and cooling loads that AI hardware demands.”

In the enterprise, this translates to a fundamental shift in infrastructure planning. “As AI workloads scale, power becomes the new bottleneck — not compute,” says the CEO of Mission Critical Group. “These models demand high-density, high-availability power delivered faster and closer to the rack, with room for modular growth.”

Still, Horn sees an opportunity in the challenge. “Modern data centers often use efficient cooling methods,” he says. “This waste heat can be reused for district heating or agricultural applications like greenhouses.” The future of data center energy may not be about using less — it may be about using it smarter.

2. Network infrastructure isn’t keeping up

As organizations ramp up investment in AI infrastructure, many are running into limits they didn’t expect. “When we got our infrastructure ready for AI workloads, we realized that the bottleneck isn’t only computation; it can also be I/O and data pipelines,” says Chandrakanth Puligundla, tech lead of Data Analysis, Data Engineering & Data Governance at Albertsons.
“I’ve seen teams buy costly GPUs only to have them sit about doing nothing while they wait for data to flow or be preprocessed.”

According to a recent McKinsey report, five types of organizations will drive and benefit from the tech industry’s massive investment in AI infrastructure.

In one instance, Puligundla’s team underestimated the need for fast local storage and efficient data loading. “We had to redesign our pipeline to leverage NVMe caching closer to the compute layer and moved parts of our data preprocessing upstream,” he says. “These changes had a bigger impact on training time than upgrading hardware.”

That mismatch between AI’s needs and current infrastructure often extends to the network fabric itself. “Legacy data center networking technologies just aren’t optimized to support the ultra-low latencies and high reliability and scalability to match unprecedented volumes of data, network responsiveness, and security demands of this AI era,” says Gulyani, Nokia’s CMO for Network Infrastructure.

Gulyani says that many organizations are starting to address this by deploying high-capacity, low-latency, lossless data center fabrics tailored to AI. Nokia, he says, has worked with hyperscaler nScale and cloud provider CoreWeave on next-generation interconnect solutions, including 800G IP and optical networking. “Now is the time for the telecoms industry to rethink network design — prioritizing scalability, flexibility, and automation — to prevent them from becoming a bottleneck in AI strategies,” Gulyani says.

3. Cloud and hybrid storage are key parts of the puzzle

As AI workloads evolve, even organizations committed to on-premises infrastructure are leaning into public cloud and hybrid storage strategies. Adya, executive vice president and head of Americas Delivery at Infosys, says that successful AI data center modernization efforts entail “shifting workloads to the public cloud, and adopting hybrid storage.
These moves boosted agility and cut energy use and cost.”

This blend of on-premises and cloud-based compute isn’t just about performance — it’s also about access. For organizations without massive infrastructure budgets, a hybrid approach can be the difference between riding the AI wave and being left behind. Classroom365’s Friend has worked with customers to navigate these limitations. “A lot of the schools we help, especially in under-funded councils, don’t have the means to rebuild kit or hire local AI brains,” he says. “But they’re not excluded.”

Instead, they’re finding success with a pragmatic model. “What’s proving to work best for those cases is a hybrid approach, like outsourcing the heavy lifting to cloud-based inference services while keeping local infrastructure thin but rock-solid,” Friend explains. “Think of it as reserving GPU bursts from a cloud provider instead of buying racks that you cannot control.”

This approach requires a mindset shift as much as a technology shift. “You don’t outbuy the problem. You merely know where to plug in and where to make someone else do the horsepower,” Friend says. For many deployments at small or under-resourced organizations, the real challenge isn’t raw compute; it’s integration and connectivity. Hybrid strategies may prove to be the most versatile and inclusive path forward.

4. Data governance and ethical oversight matter more than ever

CIOs need to look beyond hardware performance and think about the data in their data centers that will become grist for AI applications — and about the larger regulatory and ethical implications of that data’s use. “You need transparency in your data pipelines, robust model versioning, and auditable workflows to ensure models are fair, explainable, and compliant with emerging regulations,” says Singh, assistant professor of information systems at Warwick Business School.
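The traceability Singh describes, tying each model version to a fingerprint of its training data in an append-only log, can be sketched in a few lines. The names here (`ModelRecord`, `register_model`) are illustrative, not from any particular governance product, and a real system would persist the log durably rather than keep it in memory:

```python
# Minimal sketch of an auditable model registry, using only the standard
# library. Illustrative names; assumes in-memory storage for brevity.
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    model_name: str
    version: str
    training_data_sha256: str   # fingerprint linking the model to its exact training set
    params: dict
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(data: bytes) -> str:
    """Hash the training data so every model version is traceable to it."""
    return hashlib.sha256(data).hexdigest()

def register_model(log: list, name: str, version: str,
                   data: bytes, params: dict) -> ModelRecord:
    rec = ModelRecord(name, version, fingerprint(data), params)
    log.append(asdict(rec))          # append-only audit trail
    return rec

audit_log = []
record = register_model(audit_log, "churn-model", "1.0.0",
                        b"training-set-v1", {"lr": 0.01})
print(record.version, record.training_data_sha256[:12])
```

The point is not the specific code but the discipline: every deployed model maps back to a version, a parameter set, and a hashed snapshot of its inputs.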
Even if the models themselves remain black boxes, Singh says, the systems around them can — and should — be made fully accountable. That means tracking where data comes from, how it’s processed, and what goes into training; saving each model version with complete documentation; and logging deployment activity to support traceability. “AI-ready infrastructure is as much about trust and accountability as it is about speed and scale,” she says.

Achieving that trust starts with strong data governance. “How a company responds to questions about its data inventory, ownership, and quality reflects its AI readiness,” says the CEO of Altum Strategy Group. He recommends standing up a distinct data governance organization that includes not just IT and business leaders but also representatives from the data center. “For companies with data centers, data center members will likely be part of the data governance committee and will be responsible for executing the company’s data governance policies and practices.”

The CEO and founder of Gradient AI emphasizes that data completeness and consistency are table stakes for responsible AI. “A client recently shared a dataset where 40% of the most critical fields were missing data or incomplete — this is not uncommon,” he says. Without high-quality, well-governed data, even the most sophisticated AI systems risk producing biased, incomplete, or meaningless results.

5. Your data center staffing needs a skills upgrade

AI is not only stressing data center hardware; it’s also putting pressure on the people who keep those systems running. “As AI compute growth is extraordinary, there is currently a significant skills gap and we see that the talent pool is struggling to keep up,” says Scandinavian Data Centers’ Horn. He points to industry research in which 71% of respondents cite a “lack of qualified staff” as a concern. The problem spans both traditional infrastructure roles and the new skill sets required to support AI.
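Data-quality gaps like the 40%-missing dataset described above can be caught before any compute is spent, and doing so needs no special tooling. The sketch below runs a per-field completeness audit; the field names and the 80% threshold are illustrative assumptions:

```python
# Sketch of a per-field completeness audit for tabular records.
# Field names and the 80% threshold are illustrative assumptions.

def completeness(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Fraction of records with a non-empty value for each field."""
    n = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / n
        for f in fields
    }

rows = [
    {"policy_id": "A1", "claim_amount": 1200, "injury_code": None},
    {"policy_id": "A2", "claim_amount": None, "injury_code": ""},
    {"policy_id": "A3", "claim_amount": 310,  "injury_code": "S72"},
]
report = completeness(rows, ["policy_id", "claim_amount", "injury_code"])
flagged = [f for f, c in report.items() if c < 0.8]  # fields too sparse to train on
print(flagged)
```

A check like this is also a practical upskilling exercise: it asks infrastructure staff to reason about data pipelines, not just the hardware underneath them.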
Classroom365’s Friend has seen this firsthand in the education sector. “Your usual sysadmin likely hasn’t touched NVIDIA Triton or Kubernetes operators for model lifecycle management,” he says. Friend emphasizes that in his work the goal isn’t to turn infrastructure staff into machine learning experts, but to give them the right foundational skills. “We’ve trained our team on container management and basic model orchestration so they can support AI tools without feeling like they’re flying blind,” he says. “This wasn’t about teaching deep learning, but about maintaining infrastructure that doesn’t break when an AI tool plugs in.”

Abulafia, chief business officer at CloudX, says that in enterprise deployments, upskilling must go deeper than a few new technical certifications. “Traditional infrastructure roles are no longer enough,” he says. “You need people who understand both the physical layer and how AI workloads interact with that stack.”

Abulafia also notes the growing importance of automation and collaboration. “AI amplifies the need for automation, and teams need to collaborate across domains: data science, DevOps, and IT,” he says. “The biggest challenge for leaders in this new context is not only planning for hiring but also planning for training the team.”

6. Smart organizations implement cross-functional governance structures

Collaboration across domains becomes even more essential when enterprises try to operationalize AI broadly across the business. The smartest organizations are doing so by building dedicated cross-functional groups to align their data centers with the new era of AI. “You need a mix of infrastructure, operations, customer service, sales, data science, programmers, cybersecurity, compliance, customer input and business leaders at the table,” says Carter, CEO and founder of Best Practice Institute.
“Not just to plan, but to co-own and co-create success.” These AI councils “avoid siloed systems and shadow projects — all of which break down communication and erode trust in the company.”

The need for cross-functional alignment is particularly acute because AI workloads blur the lines between disciplines. “AI workloads demand more collaboration between software and infrastructure teams than traditional workloads ever did,” says Albertsons’ Puligundla. “Software engineers need visibility into hardware constraints, and ops teams need to understand the behavior of training jobs or inference services.”

Puligundla advocates for making data center environments more “developer-accessible,” with observability tools, APIs, and infrastructure-as-code practices tailored for machine learning workflows. That kind of collaborative infrastructure not only supports current models but also ensures adaptability for whatever comes next.

7. Leadership and good planning are critical for the transition

“Getting your data center AI-ready isn’t just about infrastructure — it’s about leadership,” says Best Practice Institute’s Carter. “AI without clarity becomes shelfware.” Too often, companies dive into AI initiatives without clearly defining what they’re trying to accomplish or how success will be measured. “Most companies rush to implement before aligning teams around the problem they’re solving, how decisions will be made, and what success looks like,” Carter says. In his experience, the most effective leaders begin not with tools but with people — investing in culture readiness, governance, and change alignment.

It’s a lesson Infosys’s Adya has seen play out repeatedly. Many AI data center modernization efforts stall, he notes, because of “underestimating the complexity of legacy integration and lacking a clear migration roadmap” — failures that point to the need for strong planning and structured change management.
Just as important is understanding that AI isn’t a fixed deployment — it’s an ongoing transformation. “AI workloads demand ongoing iteration and humans in the loop,” Carter says. “That means IT teams must work more like product teams — fast cycles, sprints, continuous learning, connecting tech to business value, staying close to the customer.” He adds that “execs must create time and space to educate and coach the broader organization — not just on the tech, but on how it changes workflows and decisions. Transparency is key. People fear what they don’t understand.”

Helping your staff understand the new world of AI may be the most important thing you can do to prepare your data center for what’s to come.