Where and how you process data can make or break your gen AI initiatives.

Irrespective of industry, the rise of gen AI has reignited boardroom discussions about cloud strategies. The potential is big, but so are the complexities, especially for CIOs suddenly facing budget overruns, security risks, and tangled hybrid environments.

"Companies are trying everything, which leads to enormous costs," says Juan Orlandini, CTO North America at Insight. "If you're running gen AI in the public cloud, the costs add up quickly. You get charged for everything: compute, storage, and all the network traffic."

Those surging expenses reflect a broader reality. Gen AI workloads are unpredictable, data-intensive, and often experimental, according to Bastien Aerni, VP at GTT, a networking and security-as-a-service provider. "CIOs often don't know how successful a given initiative will be," he says. "Overinvesting is risky, and underinvesting limits scalability and user experience."

These costs are often amplified by data gravity: once a large amount of data accumulates in one place, it becomes inefficient, both technically and financially, to move it elsewhere for processing. At the same time, cloud environments can unintentionally create data swamps through redundancy and uncontrolled sprawl.

Orlandini warns of hidden costs when separating compute and data. "If your data is on prem and you use a cloud-based AI service, you either need high-speed connections or you need to duplicate the data in the cloud. Both are costly."

But hybrid and multicloud strategies present trade-offs of their own: managing compliance in global environments, avoiding vendor lock-in, and balancing performance with cost. "The main risk is lock-in," says Scott Gnau, head of data platforms at data tech provider InterSystems. "Vendor lock-in can derail your long-term strategy. If your tools are tied to one vendor's stack, switching becomes difficult, and expensive."

Security and compliance further complicate matters. For Boris Kolev, global head of technology at JA Worldwide, the youth-serving NGO operating in 115 countries, these concerns are paramount. "We deal with student data across many jurisdictions," he says, "so we have to comply with everything from GDPR to local youth protection laws." He adds that any AI service the organization uses must undergo rigorous security vetting before deployment.

Work through the new set of trade-offs

To navigate this complex terrain, CIOs must reconsider the foundational trade-offs in cloud strategy. The optimal solution increasingly lies in architectural flexibility: balancing proximity, performance, security, and vendor independence.

All four IT leaders featured here stress the importance of processing data close to its source. Whether data sits on prem, at the edge, or in a regional cloud, moving it introduces both latency and cost. "Real-time AI only works if you're close to the data source," says Gnau. "If your model runs on data where it resides, latency drops and costs stay under control."

Moving data around also increases security risks, and dispersed organizations such as JA Worldwide are especially vulnerable. "We prioritize keeping data local and minimizing access wherever possible," says Kolev. "Our AI orchestration uses local models and data centers to comply with protections like GDPR and CCPA."

Kolev adds that JA Worldwide uses metadata-driven orchestration to decide the most efficient location for each task. "It's about selecting the most efficient dataset based on proximity and response time," he says. This approach reduces unnecessary data movement, improves compliance, and keeps costs manageable.
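Kolev doesn't detail the mechanics, but the routing decision he describes can be sketched in a few lines. The Python below is a minimal illustration of metadata-driven placement, choosing the closest replica of a dataset that satisfies a task's compliance requirements; the region names, latency figures, and regime tags are assumptions for the example, not JA Worldwide's actual implementation.

from dataclasses import dataclass

# Illustrative only: regions, latencies, and compliance tags are hypothetical.

@dataclass
class DatasetReplica:
    region: str            # where this copy of the dataset lives
    latency_ms: float      # observed response time from the caller
    regimes: set[str]      # compliance regimes this location satisfies

def pick_replica(replicas: list[DatasetReplica],
                 required: set[str]) -> DatasetReplica:
    """Route the task to the closest replica that meets every requirement."""
    compliant = [r for r in replicas if required <= r.regimes]
    if not compliant:
        raise LookupError(f"no replica satisfies {required}")
    return min(compliant, key=lambda r: r.latency_ms)  # proximity wins

replicas = [
    DatasetReplica("eu-west", 38.0, {"GDPR"}),
    DatasetReplica("us-east", 110.0, {"CCPA"}),
]
print(pick_replica(replicas, {"GDPR"}).region)  # -> eu-west

The same pattern extends naturally to weighing cost or capacity alongside proximity, which is the kind of automation Kolev describes.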
In some cases, this means using regional cloud providers or edge infrastructure. GTT's Aerni cites an example: "A large UK construction firm uses AI on the ground to validate builds against contracts. That validation can't wait on centralized processing. These types of real-time applications make edge computing indispensable."

Avoiding vendor lock-in likewise demands a flexible architecture. "Use platforms or frameworks that allow you to swap out AI backends," says Insight's Orlandini. "Build an abstraction layer that insulates your applications from specific vendors."
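The abstraction layer Orlandini describes is a familiar interface pattern. Here is a minimal Python sketch under assumed names; the adapters and the stand-in client are hypothetical, and a real implementation would wrap actual vendor SDKs behind the same neutral interface.

from typing import Protocol

class TextModel(Protocol):
    """Neutral interface the rest of the application codes against."""
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    """Adapter for a cloud vendor's SDK; the vendor call is hidden here."""
    def __init__(self, client):
        self._client = client
    def complete(self, prompt: str) -> str:
        return self._client.generate(prompt)  # vendor-specific method, assumed

class LocalModel:
    """Adapter for an on-prem runtime behind the same interface."""
    def __init__(self, runtime):
        self._runtime = runtime
    def complete(self, prompt: str) -> str:
        return self._runtime.run(prompt)      # runtime-specific method, assumed

def summarize(model: TextModel, document: str) -> str:
    # Application code never touches a vendor SDK directly, so swapping
    # backends is configuration, not a rewrite.
    return model.complete(f"Summarize this document:\n{document}")

class EchoClient:
    """Stand-in for a real vendor client so the sketch runs end to end."""
    def generate(self, prompt: str) -> str:
        return f"[summary of] {prompt[-40:]}"

print(summarize(HostedModel(EchoClient()), "Q3 cloud spend rose 14% on AI workloads."))

Because summarize() depends only on the TextModel interface, moving from a hosted model to a local one, or to a new vendor entirely, is a one-line configuration change rather than a rewrite.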
Aerni echoes this view. "We're focused on enabling innovation through a high-performance and flexible platform that lets us safely test and activate new technologies in a vendor- and technology-agnostic manner."

Kolev adds that his approach is shaped by budget-conscious decision-making honed in Eastern Europe. "We've always been cautious about vendor lock-in, and lean toward open-source solutions first," he says. "We often build custom architectures using a patchwork of smaller providers."

Open standards and open-source tools aren't just ideology; they're also good risk management. "At InterSystems, we stick to open standards and interchangeable architectures," Gnau says. "That gives us and our clients the ability to move when business needs change."

For JA Worldwide, the choice is often driven by cost. "We rely heavily on open-source or discounted resources," says Kolev. "We start with open source and only pay for licensed platforms when absolutely necessary."

AI projects are notorious for runaway costs, and many operate outside normal cost-control frameworks. Orlandini stresses the need for FinOps to evolve. "AI projects are often siloed," he says. "Organizations need to apply the same discipline to AI as they do to traditional workloads."

Monitoring usage patterns is key. JA Worldwide, for example, tracks reads per minute and latency to gauge the intensity of activity. "When usage spikes, we may manually activate a kill switch to prevent cost overruns," Kolev says. "Long term, we want to automate this with AI."

Aerni adds that CIOs need to know their customers and their data equally well, and that poor data quality inflates storage costs without delivering value. "Moving data from point A to B can be expensive, especially on public clouds," he says.

AI's hunger for data also heightens the need for governance. "Companies accumulate data that's barely usable," says Aerni. Knowing where your data is, and what it's worth, is now a CIO imperative.

Still in the formative days of AI use cases

Despite the hype, or perhaps because of it, most enterprises are still in the early phases of gen AI deployment. "I haven't seen many full-scale deployments yet," says Gnau. "Most are still in pilot or early-stage implementation."

Many organizations are beginning with lightweight, low-risk applications such as transcription tools, document summarization, and customer service chatbots before expanding to more strategic use cases. These early efforts are helping to validate ROI and build internal confidence.

Some companies are layering AI onto legacy systems through APIs or embedding retrieval-augmented generation (RAG) strategies into their data workflows. These patterns let them augment current infrastructure rather than replace it outright, lowering the barrier to entry. As experience grows, more use cases will move into critical operations, especially where data quality, latency, and regulation are well understood.

That said, some use cases are already delivering tangible value. JA Worldwide is testing a "pitch master" AI that helps students improve entrepreneurial presentations. "The AI analyzes tone, content, posture, and business model quality, offering feedback for improvement," says Kolev.

At GTT, observability and agentic AI are gaining traction, too. "Agentic AI can support service assurance functions like customer service and ticket logging," Aerni explains. "Observability helps anticipate outcomes, like helping an airline optimize staffing based on traveler flows, for instance."

These examples hint at a shift from speculative AI to purpose-built deployments that augment existing operations. As CIOs move from testing to scaling, the emphasis will likely shift from possibility to performance.

Architect for change

Gen AI is rewriting the rules for cloud infrastructure, and as CIOs scramble to adapt, the old playbooks are no longer sufficient. Architectures must now balance performance, cost, flexibility, and compliance while anticipating future innovations and regulations, so CIOs should revisit their infrastructure decisions more often than before. With gen AI models evolving rapidly, what was a best practice a year ago may now be an expensive bottleneck.

Flexible architectures and tooling allow for faster adaptation. Cloud strategies should also be guided by an understanding that different stages of AI adoption (experimentation, validation, and production) require different infrastructure approaches. The ability to move seamlessly across those stages without costly overhauls is becoming a defining trait of successful AI infrastructure planning.

"A new tool may be 10 times better tomorrow," says Orlandini. "CIOs should be ready to pivot." That means planning for change, not just building for today. "It's no longer enough to have a three-year IT plan," Aerni adds. "You need to architect for change."

Whether it's cost governance, location-aware compute, or modular architectures, the message is clear: The winners in the gen AI era will be those who prepare not just to scale, but to evolve.