


4 thoughts on who should manage AI agents

Aug 11, 2025 · 7 mins
Human Resources | IT Governance | KPMG

As agents become more autonomous, questions arise about which groups within the enterprise should manage them, and whether they should be treated as software or as humans.

[Image: Rethinking digital transformation for the agentic AI era. Credit: Rob Schultz / Shutterstock]

As AI agents proliferate and move from pilots to production, we need to turn our attention beyond AI agent builder platforms to AI orchestration and AI GRC platforms. And with greater autonomy come questions about which groups within the enterprise, like IT and HR, should manage agents and how they should be treated.

AI agents are increasingly integrating deeper into enterprise processes, and there’s growing debate about how they may start eating away at the long-running SaaS model. Organizations are moving from standalone pilots of individual agents to real-world multi-agent deployments, implementing hundreds or even thousands of AI agents inside their enterprise processes.


According to the most recent KPMG Pulse Survey, most organizations are past AI agent experimentation: 33% have deployed at least some agents, up threefold after two consecutive quarters at 11%. All this raises questions about how we oversee them, not only as software but also as human-like employees who work autonomously to achieve assigned goals.

For example, from an HR perspective, is it appropriate to think of agents as human-like workers, and should HR, not IT, ultimately manage, hire, and fire them? How will agents deal with overlapping roles and responsibilities, and if measured and paid on performance and outcomes, how will they take credit? Will they be developed through prompt engineering by vendors to take more credit than they deserve? And what about unintended consequences if agents go rogue?

From an IT perspective, how will orchestration evolve from simple to complex step coordination, and ultimately to autonomous orchestration, with agents forming their own external partnerships and alliances? Many vendors, like Microsoft, are already making great strides by supporting multiple orchestration patterns such as sequential, concurrent, group chat, handoff, and magentic. We also have an annual trust index for AI, but do we need a trust or risk score for each individual agent as well? How is an agent deemed qualified for a task, and when do, and don't, we need a human in the loop?
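To make the pattern names concrete, here is a minimal sketch of three of them in plain Python. This is illustrative only, not any vendor's API: the `Agent` type, and the `sequential`, `concurrent`, and `handoff` functions, are hypothetical names assumed for this example.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List

# A hypothetical agent: any callable mapping an input string to an output.
Agent = Callable[[str], str]

def sequential(agents: List[Agent], task: str) -> str:
    """Sequential pattern: each agent's output becomes the next one's input."""
    for agent in agents:
        task = agent(task)
    return task

def concurrent(agents: List[Agent], task: str) -> List[str]:
    """Concurrent pattern: every agent works on the same task in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda agent: agent(task), agents))

def handoff(agents: Dict[str, Agent], router: Callable[[str], str], task: str) -> str:
    """Handoff pattern: a router picks which specialist agent takes the task."""
    return agents[router(task)](task)
```

Group chat and magentic orchestration add a shared conversation and a planning manager on top of these basics, which is where real orchestration platforms earn their keep.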

For CIOs and CAIOs looking to address these and many other questions to not just keep up, but get ahead of how to manage AI agents, here are four recommendations.

Explore where AI orchestration and AI governance platforms meet

Current solutions for managing AI agents span both AI-native and traditional platforms. AI orchestration and AI governance platforms are increasingly converging as organizations work out how to manage agents technically, as well as ethically, legally, and operationally.

ServiceNow’s platform, for example, helps organizations manage any AI end to end, and provides role-based access for the CAIO, CIO, and CTO, as well as risk and security leaders. This is a useful step to bring all relevant stakeholders together from across the enterprise, and give each role its own reports and dashboards for insight into agent behavior.

With the orchestration and governance space evolving so rapidly, it’s important for CIOs and CAIOs to pay close attention and continuously explore the platforms, how they’re converging, and what makes sense for their organization.

Expand the scope of your AI COE

If you have an AI center of excellence already focused on traditional AI, ML and gen AI, expand its scope to include agentic AI if you haven’t done so already.  

Also, if you have an in-house global business services (GBS) organization, consider housing your COE there, since GBS often supports both HR and IT, as well as other functions such as finance and supply chain.

The focus should be on orchestration of both humans and AI. According to Ian Barkin, co-CEO and co-founder of multi-agent consultancy magentIQ, the antidote is not more AI, it’s orchestration. “AI alone lacks the judgment, context, and governance awareness to operate safely at enterprise scale,” he says. “Human oversight isn’t optional, it’s essential, so the future of work won’t be AI-only, it’ll be a hybrid of AI and people where AI and human agents collaborate dynamically, with clear lines of accountability and escalation.”

Bring HR into all aspects of agentic workforce management

The KPMG Pulse Survey says nearly nine in 10 leaders think agents will require organizations to redefine performance metrics, and prompt them to upskill employees currently in roles that may be displaced.

“If you manage AI agents like you manage software development, you’re already behind,” says Kathleen Walch, director of AI engagement and community at PMI. “Forward-looking leaders treat AI agents as dynamic digital talent, meaning they’ll need onboarding, performance reviews, and ethical and trustworthy boundaries. HR must be involved in defining digital roles and responsibilities and centers of excellence need to guide agent deployment, usage, and experimentation. AI governance needs to be established as well to empower teams to experiment, learn from others, and scale AI agent deployments responsibly and strategically.”

Agents will necessitate redefined performance metrics for employees, but as humans and AI agents work more collaboratively, this will also necessitate a redefinition of AI agent performance. Ultimately, agents may well work for the business, or for IT on IT-related tasks, just like human workers. They’ll also be managed by IT from a platform and infrastructure perspective, and by HR from a people and process perspective.
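One way to picture this dual IT/HR management is a shared agent registry entry that carries both a platform owner and a process owner, along with performance and risk signals. This is a hypothetical sketch; the field names and thresholds are illustrative assumptions, not any product's schema.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """Illustrative registry entry for one AI agent, co-owned by IT and HR."""
    name: str
    platform_owner: str        # IT: who runs the agent's infrastructure
    process_owner: str         # HR/business: who owns the role the agent fills
    tasks_completed: int = 0
    tasks_escalated: int = 0   # handed off to a human in the loop
    risk_score: float = 0.0    # assumed scale: 0.0 (low) to 1.0 (high)

    def escalation_rate(self) -> float:
        total = self.tasks_completed + self.tasks_escalated
        return self.tasks_escalated / total if total else 0.0

    def needs_review(self, risk_threshold: float = 0.7) -> bool:
        # Flag agents that are high-risk or escalating unusually often.
        return self.risk_score >= risk_threshold or self.escalation_rate() > 0.25
```

A record like this gives IT its platform view (risk score, escalations) and HR its people-and-process view (role ownership, performance) from the same source of truth.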

If you’re not doing so already, get HR plugged into all aspects of your AI agent orchestration and governance, including your strategy, COEs, and councils.

Make governance a race to the top, not the bottom

Potential issues related to lack of AI governance are stark. “Left unmanaged, AI agents will create chaos for IT, InfoSec, and data security teams, exposing companies to reputational, financial, and legal risks,” says Barkin. “Every unsanctioned agent deployment becomes a potential policy violation, and every ungoverned interaction poses a risk of AI behaving unpredictably, misaligned with corporate ethics or regulatory expectations. This isn’t a distant threat, it’s an operational minefield already materializing in enterprises pushing AI-first without AI-governed strategies.”

This shouldn’t be simply about compliance either. Organizations should go beyond complying with regulations, such as the EU AI Act, and look to help advance the industry, not just tick a box. “Success will be measured not by how many agents you deploy, but how safely and effectively they deliver outcomes, with compliance and control built in by design,” Barkin adds.

The responsibility for AI governance and AI safety, both now and in the years to come, lies with AI vendors, the enterprise, and end users alike. According to Marcus Murph, US head of technology consulting at KPMG, the foundations for agent-powered enterprises are being laid right now, and if leaders feel behind, they are. “The real risk isn’t moving too fast, it’s mistaking experimentation for transformation,” he says. “The winners won’t be the ones with the most pilots but the ones investing now in scalable data architectures, agent governance models, and workforce readiness. Because once agents are everywhere, it’s too late to retrofit trust, structure, or strategy.”

Spending extra time on AI orchestration and governance right now, and thinking about how AI agents will impact your technology platforms, COEs, and your approach to governance, will enable your organization to scale more rapidly, and advance your AI maturity in the months and years ahead.