
Grant Gross
Senior Writer

Executives love their AI rollouts, but employees aren’t buying it

Feature
Aug 8, 2025 | 5 mins
Artificial Intelligence | Change Management

Less than half of workers see their organizations’ approach to gen AI as strategic and successful. CIOs must embrace the change management challenge to ensure clear direction and organization-wide AI literacy.

Credit: PeopleImages.com - Yuri A / Shutterstock

Most company executives believe their AI rollouts have been successful, but they apparently haven’t asked their employees.

Nearly three quarters of executives believe their companies’ approach to generative AI is well controlled and highly strategic, with a similar percentage saying their companies have been successful in using gen AI over the past year.

But less than half of employees surveyed agree. This huge disconnect between executive and employee attitudes about gen AI has serious implications, putting employee trust in AI and long-term adoption in doubt.

Executives and employees share some of the burden for these perception gaps, says Writer CEO May Habib.

“So much of the disconnect is a lack of understanding about what needs to happen for real and lasting change,” she says, adding that employees bear some responsibility for learning AI tools. “The ability to learn new skills is just absolutely paramount here, and employees need to own up to it.”

Executive leadership, however, is not off the hook: leaders need to share their vision for AI-based transformation, train their workers, and recognize that their strategic intentions and experience with AI may not translate down to many employees, Habib says.

And with only a third of employees believing their organizations have a high level of AI literacy, while nearly two-thirds of executives do, change management efforts are sorely needed, she says.

“When you’re somebody at the top of the food chain, you’ve got the best tools, the best access, the best employees on your team, you have an executive assistant, and when you can see how much you’re able to crunch and process and see and how good the insights are, it is such an eye-opener,” Habib says. “But if you’re an employee and you’re being asked to do AI as a side gig on the weekend as an extracurricular with no incentive, no extra pay, why would you do it?”

Tools and policies aren’t enough

Christina Inge, an instructor on AI in marketing at Harvard University, sees the AI disconnect in her business consulting work. “Leaders often believe their AI strategy is effective because they’ve invested in tools or set policies,” she says. “But many employees … are the ones navigating tool limitations, workflow friction, and a lack of practical training or access.”

Employees will find work-arounds when their companies’ AI visions don’t match their experience on the ground, she adds.

“When employees don’t feel supported, they’re more likely to shadow-IT their way into productivity — using unsanctioned tools, often paid for out of pocket,” Inge says. “That not only creates compliance risks, but it means companies aren’t benefiting from the full strategic value of AI; it becomes piecemeal and uneven rather than systemic and scalable.”

Indeed, the Writer survey found that 35% of employees use their own money to pay for gen AI tools they use at work. About 15% of workers pay $50 or more a month out of their own pockets.

A lack of trust also leads to more sinister implications. Nearly a third of employees surveyed admitted to “sabotaging” their companies’ AI strategies, by, for example, entering company information into non-approved gen AI tools, using non-approved AI, or not reporting AI security leaks. Some employees may also be worried about losing their jobs to AI.

All kinds of problems

When employees don’t trust the AI efforts at their companies, security becomes impossible to manage, data governance breaks down, and AI literacy gaps widen because formal training doesn’t align with real-world usage patterns, says Eric Vaughan, CEO of AI software firm IgniteTech and network management vendor GFI Software.

“When executives think they’re succeeding while employees struggle with inadequate tools, you get organizational friction that kills productivity gains,” he adds. “We’ve seen companies spend millions on AI initiatives while their workforce resorts to unauthorized tools because the approved solutions don’t match their workflows.”

This disconnect is a leadership problem, not a technology issue, and executives need to measure engagement, not just productivity, Vaughan adds.

“If your employees aren’t curious about the tools you’re implementing, you haven’t actually transformed anything,” he says. “Companies that don’t bridge this gap will find themselves outrun by organizations that put employees and executives on the same page about AI capabilities and limitations.”

Leadership can fix the problem, but it takes time and effort, Vaughan says.

“Most companies quit too early,” he adds. “At IgniteTech, it took over a year for the culture shift to stick. People who were skeptical about AI are now coming to leadership meetings with ideas for new applications. Teams that used to escalate every decision are solving problems independently.”

To make progress, executives need to get their hands dirty, he adds. Companies can help by making early adoption of AI voluntary and creating AI office hours where employees can get help, he suggests.

“I spent my first month using every AI tool we implemented, not watching demos or getting briefings, but doing work with them and writing emails with AI assistance, using it for research and letting it help with strategic planning,” Vaughan says. “When your team sees you struggling with the same learning curve they are, it changes the dynamic completely.”


Grant Gross, a senior writer at CIO, is a long-time IT journalist who has focused on AI, enterprise technology, and tech policy. He previously served as Washington, D.C., correspondent and later senior editor at IDG News Service. Earlier in his career, he was managing editor at Linux.com and news editor at tech careers site Techies.com. As a tech policy expert, he has appeared on C-SPAN and the giant NTN24 Spanish-language cable news network. In the distant past, he worked as a reporter and editor at newspapers in Minnesota and the Dakotas. A finalist for Best Range of Work by a Single Author for both the Eddie Awards and the Neal Awards, Grant was recently recognized with an ASBPE Regional Silver award for his article “Agentic AI: Decisive, operational AI arrives in business.”
