Asked one major industry analyst: “Who is going to be motivated to adopt if they know the intent is to replace them?”

Nearly one in three (31%) company employees say they are “sabotaging their company’s generative AI strategy,” according to a recent survey — a number that jumps to 41% among millennial and Gen Z employees. The survey also found that “one out of ten workers say they’re tampering with performance metrics to make it appear AI is underperforming, intentionally generating low-quality outputs, refusing to use generative AI tools or outputs, or refusing to take generative AI training.”

Other activities lumped in as sabotage include entering company information into non-approved gen AI tools (27%), using non-approved gen AI tools (20%), and knowing of an AI security leak without reporting it (16%). These are the kinds of behaviors industry analysts and observers pushed back against characterizing as “sabotage,” given that in many cases such breaches of conduct are attempts to improve productivity or to make it easier to get work done. Undermining the overall AI strategy might be a more precise description for some of them.

“If they are intentionally misleading their employer about the results of using generative AI for a particular process, or dumping their company’s sensitive data into a third-party consumer tool, that’s definitely sabotage,” says Jackson, principal research director at Info-Tech Research Group. But “if they are not using generative AI outputs because of legitimate quality concerns, that may well be part of doing their job. Or if they are using third-party tools but not sharing confidential company details, then it’s also not malicious.”

Still, Jackson agrees that actual sabotage is going on, much of it for the obvious reason: boards and senior execs publicly touting AI as a way to reduce the workforce. “Who is going to be motivated to adopt if they know the intent is to replace them?” Jackson asks. Because “AI can automate new aspects of knowledge work that have required human creativity and intelligence, there might be resistance if people feel like AI is being used to replace people in areas where we enjoy working and where we value a human touch.”

Jackson advises listening to employee feedback on where AI adds value rather than taking a “top-down approach that would risk alienating workers who feel they are training technology that will put them out of a job.”

Jackson also believes some CEOs aren’t helping to relieve the overall tension around AI and job losses by touting workforce reductions due to AI in situations where that was not the case. “Executives sometimes look to spin layoffs — it’s a rationalization — as, ‘We are not doing this because the company is in trouble. No, we are doing [the layoffs] because AI is making us so efficient that we don’t need as many people anymore.’ Instead of admitting that they over-hired, they prefer to say, ‘We are using AI as mature and tech-savvy leaders,’” he says.

A data analyst overseeing AI integration at an $80 billion retail chain — who asked that his name and employer not be stated — said he has directly seen acts of AI pushback. Although “outright sabotage is rare, I’ve observed more subtle forms of pushback, such as teams underutilizing AI features, reverting to manual processes, or selectively ignoring AI-generated recommendations without clear justification. In some cases, it’s rooted in fear: Employees worry that increased automation will reduce their role or make their expertise less valued,” the data analyst says.
But “what appears to be resistance is actually a cry for inclusion in the change process. People want to understand how AI supports their work, not just that it’s being imposed on them.”

One HR specialist said she also sees a lot of AI sabotage happening, but believes the motivation for it is not unreasonable. “Employees are resisting, delaying, and, in some cases, actively undermining gen AI rollouts. But labeling this as sabotage oversimplifies what’s often happening,” says Lindo, CEO of Career Nomad. “It’s not always malicious. It’s often protective. When employees believe gen AI adoption threatens their jobs, especially in environments with frequent layoffs or weak psychological safety, pushback becomes a survival tactic.”

“Sabotage can become real if fears are ignored,” Lindo adds. “If leadership dismisses employee concerns, doesn’t connect AI to clear upskilling pathways, and enforces top-down rollouts, employees may deliberately slow adoption or feed poor-quality inputs to protect themselves.”

Combating sabotage is tricky

Countering AI resistance requires better training and communication. But training and communication alone won’t eliminate it entirely, especially if executives are candid about plans for layoffs should their AI strategies succeed. And with AI sabotage likely impossible to fully eradicate, companies can be exposed to significant risks — and liabilities.

Powell, a technology attorney with the law firm Gregor Wynne Arney, says companies might face legal penalties if their employees engage in deliberate sabotage. Those companies should remind employees that they, too, can face personal legal peril if they engage in sabotage.

“If the company is found to have negligently supervised or enabled the employee’s sabotage and that sabotage violates other laws, such as violations of data privacy or HIPAA or confidentiality or consent to one’s data being used in training sets, and so on,” it can be liable for those violations, Powell said. “Employees might also generate information that binds the company to contracts that it does not really want, or that constitutes defamation of a third party. Or an employee might infringe third parties’ copyrights or trademarks, or disclose trade secrets of the company or one of its partners, any of which could expose the company to liability.”

Powell also points out the risk to employees themselves: “The potential for liability is also a key part of the education companies should be giving their employees, letting them know that sabotage isn’t just hurting the company, but could expose the employee to civil and criminal penalties, as well as jail time.”

Regardless of the fallout, fractional CMO Nyman sees AI sabotage efforts as nothing new. “This is Luddite history revisited. In 1811, the Luddites smashed textile machines to keep their jobs. Today, it’s Slack sabotage and whispered prompt jailbreaking, etc. Human nature hasn’t changed, but the tools have,” Nyman says. “If your company tells people they’re your greatest asset and then replaces them with an LLM, well, don’t be shocked when they pull the plug or feed the model garbage data. If the AI transformation rollout comes with a whiff of callous ‘adapt or die’ arrogance from the C-suite, there will be rebellion.”