
Barriers to running AI in the cloud – and what to do about them

BrandPost By Jeff Miller
May 20, 2025 · 3 mins
Artificial Intelligence, Machine Learning

Organizations face cloud AI deployment challenges including unpredictable costs and security concerns. Holistic, on-premises solutions can offer competitive alternatives with fast deployment, scalability, and long-term cost efficiency.

Credit: Shutterstock/Andrey_Popov

As organizations rush to deploy and run AI to power not just pilots, but also key use cases supporting vital business functions, the cloud would appear to be the best environment for deployment. At least at first glance.

After all, the cloud has unlimited extensibility, with the ability to expand or reduce resources on demand. It doesn’t require capital expenditures to deploy gear and is accessible from anywhere. Overall, one would imagine that AI deployment in the cloud would be cheaper and simpler to manage than it would on-premises.

For some AI deployments, that may be true. But many enterprises are discovering that there are significant challenges to deploying AI in the cloud. Foundry’s most recent survey asked senior IT executives about the challenges stalling cloud adoption. The No. 1 barrier was cost, cited by nearly half (48%). Security and compliance concerns were the second most significant obstacle (35%), while integration and migration challenges came in third (34%).

The survey drilled down into what, specifically, was driving IT’s cost and budget concerns, and found that the largest issue was unpredictability (34%) followed closely by the complexity of cloud pricing models (31%). Compounding these problems, IT leaders said, was the fact that they lacked cost optimization strategies (25%) and visibility into cloud usage (23%). They also noted that transferring data was extremely expensive (25%).

Simply put, IT leaders worried about how they’d be able to effectively manage and control cloud costs. In the long term, they feared it might be more costly than working on-premises.
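To see why that fear is rational, consider the basic break-even arithmetic behind the cloud-versus-on-premises decision. The sketch below (in Python, with purely hypothetical figures that are not drawn from the survey) compares a recurring cloud bill against a one-time on-premises capital expense plus a smaller recurring operating cost:

```python
def breakeven_months(cloud_monthly: float,
                     onprem_capex: float,
                     onprem_monthly: float) -> float:
    """Months after which cumulative on-prem cost falls below cumulative cloud cost.

    Simplifying assumptions: cloud cost is purely recurring, on-prem cost is a
    one-time capital expense plus a smaller recurring operating expense, and
    both monthly figures stay flat over time.
    """
    if cloud_monthly <= onprem_monthly:
        raise ValueError("On-prem never breaks even if its monthly cost "
                         "meets or exceeds the cloud's")
    # Capex is recovered at the rate of the monthly savings.
    return onprem_capex / (cloud_monthly - onprem_monthly)


# Hypothetical example: $80k/month in the cloud vs. a $1.5M on-prem
# build-out that then costs $30k/month to operate.
months = breakeven_months(80_000, 1_500_000, 30_000)
print(f"Break-even after {months:.0f} months")  # → Break-even after 30 months
```

The catch, as the survey respondents noted, is that the `cloud_monthly` input is exactly the number they struggle to predict, which is what makes this comparison hard to run with confidence in practice.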

Of course, cost isn’t everything. The pressure to get AI up and running quickly might seem to favor the cloud, where all resources are available on demand. On-premises deployments, by contrast, often move slowly: most hardware vendors sell only parts of a full AI solution, which means IT has to spend time, money, and effort selecting, deploying, and integrating the pieces to enable the desired use cases.

Note the emphasis on most.

Organizations can accelerate on-premises AI infrastructure deployments and see rapid time to value by working with a vendor that takes a holistic approach. These vendors will handle every step – from system design to cooling, installation, power efficiency, and software validation – so the organization’s IT team can focus on producing results, not overcoming roadblocks.

ASUS is an example of a holistic AI infrastructure vendor. Its ASUS AI Pod is a fully deployed, ready-to-run AI infrastructure with the power to train and operate massive AI models, all delivered in just eight weeks. Specifically, ASUS delivers a full rack with 72 NVIDIA Blackwell GPUs, 36 NVIDIA Grace CPUs, and 5th-gen NVIDIA NVLink, which enables trillion-parameter LLM inference and training. It’s a scalable solution that supports liquid cooling and is ideal for a scale-up ecosystem. Plus, it includes full software stack deployment and ongoing support.

So, the decision of where to deploy AI — the cloud or on-premises — isn’t necessarily a slam dunk for a hyperscale solution. With the right vendor, on-premises deployment can be fast, performant, scalable, and cost-efficient.
