The tech industry wants to win enterprise customers over with visions of a workforce that includes autonomous AI systems. Agentic tools, vendors say, can make decisions and handle complex tasks with minimal human intervention. 

But nearly a year into the agent push, most businesses are still trying to create appropriate procedures to enable enterprise adoption. 

CIOs are decoding a vast and fast-moving stream of marketing speak and contending with gaps in security, change management and governance, all while operating in a hazy landscape shaped by the technology’s immaturity. Even as the proportion of businesses exploring AI agents increases, meaningful adoption is limited. 

Around 65% of organizations said they were piloting the technology in April, bumping up from 37% the previous quarter, according to the latest KPMG AI Quarterly Pulse Survey. Widescale deployment, however, remained at 11%.

Enterprise interest and enthusiasm around AI agents remain high despite the hurdles, and technology leaders are working to sift through the noise and erect guardrails where necessary to speed up time to value. 

PepsiCo is pursuing agentic AI with standards established by the latest iteration of its AI governance framework and policy, released last quarter. The beverage giant first put a framework in place around four years ago. 

It’s still early days for AI agents at the company, according to Athina Kanioura, EVP and chief strategy and transformation officer at PepsiCo.

“There isn’t a government policy or framework out there — it’s only generic guidelines. We, as an industry and individual companies, are trying to develop first those policies and guidelines, and then execution will follow,” Kanioura told CIO Dive. 

In the absence of AI agent standards, technology leaders must set a pace of adoption that makes sense given the broad changes expected ahead. 

By the numbers:

40%+
The share of agentic AI projects that Gartner predicts will be scrapped by the end of 2027.

130
The number of agentic AI vendors Gartner estimates are “real,” compared with the thousands crowding the market and engaging in agent washing.

1 in 3
The ratio of enterprises taking a wait-and-see approach to agentic investments.

Governance gaps

Agentic AI presents new challenges for governance frameworks. 

PepsiCo is exploring how AI agents can improve upskilling and call center management, and it’s leaning on several technology partners to make inroads on deployment and governance. 

Kanioura said the consumer packaged goods company maintains governance around agents in a few key ways. The company relies on internal and vendor-provided guardrails, employees undergo AI training and the leadership team reviews its strategy with the board twice a year. 

“Part of the trickier element with agents is it's not a bot that does repetitive tasks and that’s it,” Kanioura said. The more intelligent and more autonomous the agent, the higher the risk, she added, especially with nondeterministic systems. Responsible AI, traceability and bias mitigation have always been a part of the company’s strategy, but agents have underscored their necessity.

Most businesses aren’t willing to move agentic solutions into production without a game plan to keep the business safe.

Examples of prudent enterprise approaches

Expedia Group: The Vrbo and Hotels.com parent company was quick to partner with OpenAI in 2022, but executives are taking a more cautious approach to AI agents amid vendor-led hype.

Ulta Beauty: The retailer sees agentic AI adoption as a way to enhance employee and customer experiences, but it is not ready for widescale implementation yet as leaders focus on establishing the right foundation.

“Traditionally, governance frameworks are rooted in a concept of system interactions that are largely deterministic with mostly predictable inputs and outputs,” Sara Duffer, VP of compliance and assurance at AWS, said in an email to CIO Dive. Policies and controls are created to address what’s known, Duffer added, and effectiveness is determined by point-in-time evaluation or looking back over a period.

“This needs to shift toward dynamic monitoring and evaluation,” Duffer said. Defining guardrails based on the interaction will be critical moving forward, but there’s also an element of learning by doing. 

“We need to apply real-world experience to gauge how these systems interact with resources, make decisions and evolve behaviors,” Duffer said. 
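Duffer’s shift from point-in-time evaluation to dynamic monitoring can be illustrated with a minimal sketch: rather than auditing an agent once, keep a rolling window of recent outcomes and flag when its error rate drifts past a threshold. The class and thresholds below are hypothetical, not any vendor’s product.

```python
from collections import deque

# Hypothetical continuous monitor: instead of a one-time audit,
# track a rolling window of recent agent outcomes and alert on drift.
class RollingMonitor:
    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True = success, False = error
        self.max_error_rate = max_error_rate

    def record(self, ok: bool) -> None:
        self.outcomes.append(ok)

    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return 1 - sum(self.outcomes) / len(self.outcomes)

    def breached(self) -> bool:
        return self.error_rate() > self.max_error_rate

# Usage: three errors in six recent outcomes trips a 20% threshold.
monitor = RollingMonitor(window=10, max_error_rate=0.2)
for ok in [True, True, False, True, False, False]:
    monitor.record(ok)

assert monitor.breached()
```

In practice the window would be fed by production telemetry and the breach would page a human reviewer; the point is that evaluation becomes continuous rather than retrospective.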

Despite consensus that AI agents require a governance playbook, enterprises are still determining what revisions are needed. 

Historically, business leaders have turned to organizations like the National Institute of Standards and Technology for guidance to reduce risks. CIOs saw NIST’s generative AI risk mitigation guides as a jumping-off point for contextualizing and mitigating risks. 

NIST has yet to release agentic AI risk management guidance, but the standards authority is beginning to address the challenge. Earlier this month, NIST included single agent and multiagent use cases as potential subjects for its forthcoming series of Control Overlays for Securing AI Systems. For now, enterprises are going it alone. 

Sara Duffer, VP of compliance and assurance at AWS, discusses key elements of an organization’s security preparedness with Director of AWS Enterprise Strategy Clarke Rodgers on June 24, 2024.
Screenshot: Amazon Web Services/YouTube
 

Security struggles, technical immaturity

Securing agents is another aspect of implementation plans that most enterprises are stuck on. CIOs and their C-suite peers are already aware of the dangers posed by unruly AI systems and are even more concerned about tools with higher levels of autonomy. 

“There’s a lot of risk that companies will build [agents] not fully thinking through all of this complexity and create security issues as a result,” said Matt Kropp, managing director and senior partner at Boston Consulting Group.

Gartner predicts AI agents will weaken cybersecurity postures by reducing the time it takes attackers to exploit authentication and communication channels. Data privacy and additional controls are also top of mind.

Kropp said enterprises will need to consider building or adding identity access controls to mitigate some of the most pressing challenges. 

“Does the agent have an identity, like an employee ID, and how do you decide what files that agent has access to?” Kropp said. “There will be technical solutions for these things, but they're not very worked out right now.”
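Kropp’s questions can be made concrete with a small sketch: give each agent its own identity, analogous to an employee ID, and deny file access by default unless a path matches that agent’s allowlist. All names here (`AgentIdentity`, `can_read`, the example paths) are illustrative assumptions, not an existing API.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

# Hypothetical per-agent identity, analogous to an employee ID.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str              # e.g. "agent-invoice-001"
    owner: str                 # the human accountable for the agent
    allowed_paths: tuple = ()  # glob patterns the agent may read

def can_read(identity: AgentIdentity, path: str) -> bool:
    """Deny by default: the agent reads a file only if a pattern matches."""
    return any(fnmatch(path, pattern) for pattern in identity.allowed_paths)

# Usage: an invoicing agent scoped to the finance share only.
invoice_agent = AgentIdentity(
    agent_id="agent-invoice-001",
    owner="jane.doe@example.com",
    allowed_paths=("/shares/finance/invoices/*.pdf",),
)

assert can_read(invoice_agent, "/shares/finance/invoices/q3.pdf")
assert not can_read(invoice_agent, "/shares/hr/salaries.xlsx")
```

A production system would tie such identities into existing IAM tooling rather than an in-process check, but the least-privilege shape is the same.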

Chandhu Nair, SVP of AI, data and innovation at Lowe’s.
Permission granted by Lowe's
 

Technical immaturity presents roadblocks to enterprise adoption. Vendors are still working through interoperability challenges and adding options for scalable governance controls.

“Knowing that the technology to scale this and the protocols needed to scale are very early, we are taking a very cautious approach,” said Chandhu Nair, SVP of AI, data and innovation at Lowe’s.

Especially as vendors hawk products, enterprises should remain wary of hype-filled claims. To overcome the noise, CIOs are relying on their own trusted networks. 

“It is important that you have an ecosystem of partners: some from the industry, some from the tech landscape and some from the broader ecosystem,” PepsiCo’s Kanioura said. “Sharing best practices and also pushing the tech developing community to continue developing solutions for users — and not just for the sake of the tech community — is super critical.”

Agentic systems, generally, should be treated as high-risk workloads, according to Cristian Rodriguez, field CTO, Americas, at CrowdStrike. 

“Enterprises need to govern agentic AI on two fronts: securing it from adversaries and controlling what it’s allowed to do,” Rodriguez said in an email. “That means enforcing identity, access and intent in real time, and securing the full AI lifecycle — from training to deployment to inference.”

If an AI tool is trusted to take an action, it needs the same level of protection as any human or machine in the environment, according to Rodriguez, such as API auditing, real-time data access analysis and identity and access management policy reviews. 
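Rodriguez’s framing of enforcing “identity, access and intent in real time” can be sketched as a guardrail that sits between an agent and any action it proposes: verify who the agent is, check whether the action is permitted, and record every decision for audit. The names (`authorize`, `PERMITTED_ACTIONS`, the agent IDs) are hypothetical, for illustration only.

```python
import json
import time

# Hypothetical allowlist mapping each agent identity to permitted actions.
PERMITTED_ACTIONS = {
    "agent-support-001": {"read_ticket", "draft_reply"},
    "agent-refund-001": {"read_ticket", "issue_refund"},
}

AUDIT_LOG = []  # in production this would be an append-only store

def authorize(agent_id: str, action: str, intent: str) -> bool:
    """Check identity and permission, and log the decision for later audit."""
    allowed = action in PERMITTED_ACTIONS.get(agent_id, set())
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "intent": intent,  # the agent's stated reason, kept for review
        "allowed": allowed,
    }))
    return allowed

# Usage: a support agent may draft a reply but not issue a refund.
assert authorize("agent-support-001", "draft_reply", "answer billing question")
assert not authorize("agent-support-001", "issue_refund", "customer asked")
```

The audit trail is what makes the API auditing and access analysis Rodriguez describes possible after the fact; the allow/deny check is the real-time half.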

Human in the change management loop

An employee's ability to engage with the technology in a meaningful way is vital to adoption. CIOs can rely on change management practices to introduce and embed AI agents into employee workflows.

In general, workers have raised concerns about a lack of human oversight, unclear accountability when mistakes occur and introducing bias or unfair treatment via AI agents. 

Training and upskilling have become a vital lever as strategies take shape. Kathy Kay, EVP and CIO at Principal Financial Group, uses employee training to facilitate open communication around AI and address employee concerns. 

Kathy Kay, EVP and CIO at Principal Financial Group.
Permission granted by Principal Financial Group
 

“We want to be able to put our employees on higher-value work … so we talk about trust and about how to feel more comfortable with AI, what to look for when something goes wrong and how to use it productively,” Kay told CIO Dive. “We talk about our enterprise risk AI framework so our employees get an understanding of how we’re trying to govern the models responsibly and ethically.”

Open dialogue can assuage employee concerns while also promoting effective adoption techniques. 

“There’s a lot of things we share that we hope will help them feel more comfortable,” Kay said. “It’s an ongoing thing that we try and talk a lot about, and as we get feedback from our employees, we try to incorporate that as well.”

Enterprises are still trying to understand how AI agents will affect job roles and functions. CIOs are considering what it will mean for workers to manage AI agents, for example. This comes as employee pushback and trepidation around AI grow.  

“There’s a leadership component to all of this where business leaders need to be very articulate and affirmative about the role of the human, not just today, but how it persists into the future so that people do feel like they have a value proposition at the end of this AI journey,” PwC Chief AI Officer Dan Priest told CIO Dive. 

Gauging readiness

In addition to managing deployment and adoption, CIOs are feeling the pressure to keep up with rapidly changing technology, knowing that fast-moving organizations are likely to make missteps. 

Gartner predicts that at least 40% of agentic AI projects will fail within the next two years as businesses contend with rising costs, unclear business value and risks. 

The agents of today will not be the agents of next year, according to analysts. Broadly, AI agent performance is low and mistakes are common across a majority of tasks.

Enterprises are frequently asking vendors about how capabilities and governance are improving, according to Dataiku’s Head of AI Strategy Kurt Muehmel. Customers want to know how AI providers plan to ensure agents always take the correct path to find an answer.

The uncertainty and the technology’s nascency are pushing organizations to take precautions. Lowe’s, for example, is leaning on internal-facing experimentation to get its feet wet, but the company is also expecting maturation to happen fairly quickly.

“Everything happens in lightspeed with AI,” Nair said. 

Some early adopters are finding success. A PwC survey found early adopters often credit AI agents with productivity boosts and cost savings. More than one-quarter of respondents are planning for budget increases of at least 26% in the next 12 months with agentic capabilities in mind.

Joe Depa, global chief innovation officer at EY.
Permission granted by EY
 

Most organizations are still finding the right balance between the potential competitive advantages associated with early adoption and the risks that accompany an immature technology. For CIOs, the fast pace of change and lack of maturity also mean writing a governance playbook without having all the answers. 

Businesses must remain vigilant amid agentic adoption, according to Joe Depa, global chief innovation officer at EY.

“No matter how interesting or exciting agentic AI is … we need to make sure that we certify the agents, we onboard them, we ensure that they are doing what they say they’re going to do, that they’re acting responsibly and using the data in the right ways,” Depa said.