For those who manage enterprise IT, 'what can AI do for us?' has to be balanced against 'what might AI do to us?' More than 30% of survey respondents cited a lack of governance and risk management solutions as the top barrier to adopting AI. Meanwhile, in a separate recent survey, 47% of organizations cited IT governance and security as key hurdles to adoption.

Not using AI to transform your organization may be a business risk. But using AI adds new, unknown risks. Non-IT people tend to commoditize AI, seeing it as magic stardust to be sprinkled on sub-optimal processes, products and, maybe, people. The reality is complex and risky. Here are five things IT leaders should be thinking about as they contemplate that risk.

1. Managing AI risk: Inside or out?

Projects that use AI fall into two buckets. The first is using AI to accelerate and scale internal processes. This optimizes something that already happens: using AI tools to support coding, for example, or to replace downstream production tasks. Typically, this means working with an AI vendor to support an existing use case. There are risks here, including data privacy, vulnerabilities that could lead to a breach, and ethical risk. But this risk is relatively low and knowable compared with that found in the other bucket: using AI to create new products and services. That puts AI at the center of process and output, and exposes it to customers. It likely involves doing something new and developing capabilities either in house or through a partner. If working through a partner, the organization is accountable for its customers' experience and data but sits at the front of a supply chain it doesn't control. This is a significant step up in risk, with outcomes that can be predicted but not known. Business leaders need to understand this distinction in order to mitigate risk and balance risk against reward.

2. Managing AI risk: AI, or AI?
Another nuance: what technology are you using, really? The term 'AI' covers a spectrum that ranges from machine learning through to more proactive technologies such as generative AI and agentic AI. With gen AI, something new is created by the AI, drawing on an underlying large language model. With agentic AI, a task is performed by the AI, which interacts with other agents or humans, some of whom may sit outside the organization. Both introduce higher levels of risk than traditional AI that simply does existing things faster. Is your organization prepared for that?

3. Managing AI risk: Where are you at?

Regulations governing the use of artificial intelligence differ around the world. How much this affects your organization depends not only on where you are, but also on where you transact with customers. It's GDPR on steroids. IT leaders in India tell us that strict data breach reporting regulations are holding organizations back from plunging into AI investments. In the UAE, regulations prohibit companies from using specific cloud vendors (including AWS), which adds another wrinkle to the use of AI. In the US and the UK, different flavors of politics have entered the conversation: both are regions in which AI, and deregulation, is seen as an economic accelerator. They sit outside more heavily regulated regions such as the EU, but need to be conscious of international standards and manage risk with that in mind. Operating internationally is an unholy mess for IT leaders working with AI, with many expectations and risky unknowns to manage.

4. Managing AI risk: All together now?

In some organizations, the 娇色导航leads AI projects that implement a business leadership strategy while the CISO across the hall tries to pump the brakes because of the risks entailed. We reported recently that a majority of CFOs are much more negative on AI than CEOs and even CIOs, in large part because CFOs hate risk. IT leaders are in an invidious position.
Line-of-business leaders have high expectations of AI and are in a rush to implement it. But IT must do the implementing and manage the risk. IT leaders need explicit buy-in and accountability for AI strategy from all their peers.

5. Managing AI risk: Data, skills, infrastructure

For any AI project to succeed, IT leaders need to put the right tools in place. Data, infrastructure (cloud, connectivity) and security are the key pillars; if any one of them is sub-optimal, the risk introduced may be intolerable. Then there is the human factor. The perfect skillset that eliminates risk doesn't exist. You need informed accountability and responsibility from employees across the board, and every IT leader needs someone with the right level of authority who can focus principally or solely on risk management and AI.

No risk, no reward

Ultimately, nothing worthwhile is without risk, and certainly not in the world of AI. But careful consideration and discussion of these five key risk vectors will support successful risk management.