Dive Brief:
- Enterprises are prioritizing AI-related risk mitigation efforts as adoption grows, but formal governance frameworks remain in development, according to a Collibra survey published Wednesday.
- Three in five businesses are now monitoring for bias, fairness and transparency, the survey of 300-plus technology decision-makers found. More than half are conducting regular AI risk assessments and audits, too. Most tech leaders are also confident in data strategies underpinning agentic AI.
- Even so, fewer than half of technology decision-makers have formalized governance policies and frameworks. Governance and compliance training for employees also remains limited, with just 47% of businesses offering it.
Dive Insight:
Enterprises are enthusiastic about the potential of agentic AI, but meaningful implementation has yet to catch on.
“Adoption of AI agents is going to take some time,” Felix Van de Maele, CEO at Collibra, told CIO Dive. “These things always take longer than you expect or necessarily want, but it is starting.”
While businesses span a wide range of readiness levels, there is some consensus on the technology's impact. Nearly nine in 10 technology decision-makers agreed that agentic AI will drive returns on investment, according to the Collibra survey. Most businesses are also pursuing adoption to some degree, though largely in exploratory phases.
If they haven’t already, enterprises will soon learn that hodgepodge approaches to governance aren’t sufficient when adopting AI agents, Van de Maele said.
“To truly monitor for bias, fairness, transparency, you only get there by establishing real governance policies and frameworks,” Van de Maele said. “Otherwise it becomes ad hoc, and that might be okay to start with, but then, at scale, it doesn’t work.”
CIOs who can efficiently help their businesses mitigate the risks of agentic AI contribute a critical piece of the enterprise ROI puzzle.
“If I release this thing into the wild, and it says mean, wrong, stupid things, that is a technical failure and a reputational failure,” Chris Mixter, VP analyst at Gartner, said during a presentation last week at the firm’s IT Symposium/Xpo in Orlando, Florida.
Businesses want to avoid the fallout. Enterprise spending tied to managing AI risks is already growing, with the average organization expecting a 24% jump next year, according to a OneTrust survey.
AI risk mitigation tactics range from the monitoring and auditing outlined in Collibra’s survey to measures such as limiting agent access to sensitive data.
“There will always be tradeoffs between security and speed to market, but your job is to make sure those tradeoffs are explicit, that they are agreed upon and that we have documentation of why we decided not to do a thing, just in case,” Mixter said.
CIOs can also lean on safeguards that have proven effective with previous iterations of artificial intelligence, such as machine learning.
“Most of what we need to do to secure AI are things that we already know how to do,” Mixter said.