
Grant Gross
Senior Writer

Moving AI workloads off the cloud? A hefty data center retrofit awaits

Feature
Jul 16, 2025 · 6 mins

Data security and cost control have IT leaders rethinking AI in the public cloud. But prepping a local data center for AI can be expensive and involved, with more infrastructure needed than a couple of GPUs.

Credit: SeventyFour / Shutterstock

As their AI programs mature, many IT leaders appear to be moving AI workloads from the public cloud to the private cloud or on-premises environments to control costs and protect data privacy.

But data center experts warn that huge and hidden costs may await CIOs attempting to update legacy data centers for the AI era, as preparing for AI workloads can be much more involved than adding a couple of GPUs.

It’s true that some organizations won’t need major upgrades to repatriate their AI projects, but those anticipating heavy AI workloads could spend tens of millions of dollars to retrofit legacy data centers, experts say. The price tag for prepping a co-location center or on-prem site will likely start at a few hundred thousand dollars, with larger upgrades going up from there.

Still, the shift away from the public cloud is happening, as CIOs get a handle on their AI workload needs. On-premises infrastructure or co-located data centers can offer more predictable pricing models than public cloud services based on per-use pricing, some IT leaders are finding.

Not your father’s data center

There are several factors to consider when renting or owning your own AI infrastructure. Starting small may work in limited circumstances, but most organizations have larger plans than just a couple of AI one-offs, says Carlini, vice president of innovation and data center at data center power and cooling vendor Schneider Electric.

Legacy data centers can be retrofitted for AI workloads, Carlini says, but it can be an involved process. To make data centers ready for substantial AI workloads, cooling, power, and other upgrades may be needed, he adds.

“If you have a very specific use case, and you want to fold AI into some of your processes, and you need a GPU or two and a server to do that, then, that’s perfectly acceptable,” he says. “What we’re seeing, kind of universally, is that most of the enterprises want to migrate to these autonomous agents and agentic AI, where you do need a lot of compute capacity.”

Racks of brand-new GPUs, even without new power and cooling infrastructure, can be costly, and Schneider Electric often advises cost-conscious clients to look at previous-generation GPUs to save money. GPU and other AI-related technology is advancing so rapidly, however, that it’s hard to know when to put down stakes.

“We’re kind of in a situation where five years ago, we were talking about a data center lasting 30 years and going through three refreshes, maybe four,” Carlini says. “Now, because it is changing so much and requiring more and more power and cooling, you can’t overbuild and then grow into it like you used to.”

Millions of dollars in build costs

A greenfield build of an AI-ready data center could cost $11 million to $15 million per megawatt, not including compute power, says Thompson, founder and CEO of data center sale and lease company WiredRE.

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says.

“As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.”

Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack.
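As a back-of-envelope illustration of that jump, the kilowatt figures below are Thompson's; the comparison itself is this article's arithmetic, sketched in Python:

```python
# Back-of-envelope: how many legacy racks' worth of power one AI rack draws.
# The kW ranges come from the article; the ratio calculation is illustrative.
legacy_rack_kw = (5, 10)   # traditional data center design point, kW per rack
ai_rack_kw = (50, 100)     # AI training workloads, kW per rack

low = ai_rack_kw[0] / legacy_rack_kw[1]   # best case: 50 kW vs. a 10 kW rack
high = ai_rack_kw[1] / legacy_rack_kw[0]  # worst case: 100 kW vs. a 5 kW rack
print(f"One AI training rack draws {low:.0f}x to {high:.0f}x "
      f"a legacy rack's power budget")
```

Even at the low end, every AI rack displaces the power budget of five conventional ones, which is why the electrical backbone, not just the racks, becomes the constraint.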

“Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.”

Cooling is also an important piece of the puzzle: not only does it enable AI workloads, but cooling upgrades can help pay for other improvements, Thompson says.

“By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”
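Thompson's point rests on power usage effectiveness (PUE), the ratio of total facility power to power delivered to IT equipment. The article gives no specific numbers, so the PUE values and utility feed below are hypothetical, chosen only to show how a cooling retrofit frees capacity under a fixed feed:

```python
# Hypothetical example of Thompson's argument: lowering PUE frees IT capacity.
# PUE = total facility power / IT equipment power, so IT power = feed / PUE.
utility_feed_mw = 5.0        # fixed utility capacity (assumed, not from article)
pue_air, pue_liquid = 1.8, 1.3  # assumed before/after cooling retrofit values

it_power_before = utility_feed_mw / pue_air     # MW available to compute today
it_power_after = utility_feed_mw / pue_liquid   # MW available after retrofit
freed_mw = it_power_after - it_power_before
print(f"IT capacity: {it_power_before:.2f} MW -> {it_power_after:.2f} MW "
      f"(+{freed_mw:.2f} MW for compute on the same utility feed)")
```

Under these assumed numbers, roughly a megawatt of capacity moves from cooling overhead to productive compute without touching the utility interconnect.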

Retrofitting can be cheaper

CIOs should expect retrofitting a data center, as opposed to a greenfield build, to cost $4 million to $8 million per megawatt, not including hardware, adds Mayham, CEO and cofounder at Northwest AI Consulting.
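Putting the two per-megawatt figures in this article side by side, a rough budgeting sketch in Python (the 2 MW target is illustrative, not from the article; hardware is excluded in both estimates):

```python
# Per-megawatt facility cost ranges quoted in the article (hardware excluded).
RETROFIT_PER_MW = (4e6, 8e6)       # retrofit estimate, $ per MW
GREENFIELD_PER_MW = (11e6, 15e6)   # greenfield estimate, $ per MW

def budget_range(target_mw, per_mw):
    """Return (low, high) facility cost in dollars for a target IT capacity."""
    return (target_mw * per_mw[0], target_mw * per_mw[1])

# Hypothetical 2 MW AI footprint:
retro = budget_range(2, RETROFIT_PER_MW)
green = budget_range(2, GREENFIELD_PER_MW)
print("retrofit:   $%.0fM to $%.0fM" % (retro[0] / 1e6, retro[1] / 1e6))
print("greenfield: $%.0fM to $%.0fM" % (green[0] / 1e6, green[1] / 1e6))
```

Even a modest 2 MW footprint lands in the eight-figure range either way, which squares with the experts' warning that this is far more than the cost of the GPUs themselves.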

While AI training racks currently draw 80 kW to 120 kW, the industry roadmap is headed toward 1 MW racks by 2030, he notes.

“That blows past the assumptions legacy data centers were built on,” he says. “So upgrades aren’t just about GPUs, they require overhauls in power distribution, rack layouts, liquid cooling or immersion, high-speed interconnects, and even floor load engineering.”

Mayham advises CIOs to start with a structural and power audit. “Some older raised floor environments just can’t handle 4,000- to 8,000-pound AI racks,” he says. “We’ve seen retrofits killed because a site couldn’t meet load-bearing requirements or didn’t have a viable utility interconnect.”

AI training and inference have different infrastructure profiles, he adds. “You can’t make smart upgrade decisions without knowing your model type, density targets, and whether hybrid or colocation options make more sense,” he says. “Know your AI mix.”

Retrofitting costs vary widely depending on the size and age of the facility, but CIOs should start by understanding the AI workloads they plan to run, agrees the president and CEO of AI workload software provider PEAK:AIO.

CIOs should also know where the AI-related data lives, how fast it needs to move, and how models will scale, he adds.

“AI infrastructure is moving toward decentralized, federated architectures that bring intelligence to the data, not the other way around,” he adds. “Traditional, centralized models struggle with this shift. Flexibility, scalability, and data governance must be designed in from the start.”


Grant Gross, a senior writer at CIO, is a long-time IT journalist who has focused on AI, enterprise technology, and tech policy. He previously served as Washington, D.C., correspondent and later senior editor at IDG News Service. Earlier in his career, he was managing editor at Linux.com and news editor at tech careers site Techies.com. As a tech policy expert, he has appeared on C-SPAN and the giant NTN24 Spanish-language cable news network. In the distant past, he worked as a reporter and editor at newspapers in Minnesota and the Dakotas. A finalist for Best Range of Work by a Single Author for both the Eddie Awards and the Neal Awards, Grant was recently recognized with an ASBPE Regional Silver award for his article “Agentic AI: Decisive, operational AI arrives in business.”
