Dive Brief:
- Enterprise decision-makers say AI risks are packing a punch as adoption grows and tool rollouts continue, according to an EY survey of nearly 1,000 C-suite leaders published last week.
- More than 3 in 5 organizations have suffered AI risk-related losses of at least $1 million, and nearly all — 99% — reported some level of financial impact. Taken together, EY estimates respondents have racked up $4.3 billion in total losses.
- Governance and responsible AI practices offer enterprises a buffer against risks. Enterprises with a defined set of responsible AI principles experience 30% fewer risks compared with their less prepared peers. Most companies have strengthened governance, with leaders adopting an average of seven mitigation actions, such as sharing standards with employees, adopting metrics to measure adherence and establishing safeguards.
Dive Insight:
Enterprises are experiencing their fair share of AI-related growing pains while trying to manage new risks and continue the quest for ROI.
“Most organizations have seen rapid adoption of AI solutions,” Joe Depa, global chief innovation officer at EY, told CIO Dive. “What they’re finding now is that the question becomes, ‘Where’s my value that I’m achieving with AI?’”
Enterprises are often chasing productivity boosts, cost savings or revenue growth as part of their AI initiatives, Depa said. In EY’s survey, many organizations reported achieving their goals to some degree, but others said they still struggled to capture returns.
“Some have been disillusioned by not seeing the amount of ROI that they expected at this point in their journey, especially after investing significant dollars, resources and time in AI,” Depa said.
Analysts and industry experts have emphasized the connection between sharpened governance and innovation. EY’s survey results take that advice one step further: AI guardrails are proving to be useful in some of the most challenging areas to achieve ROI, such as revenue growth and employee satisfaction.
While enterprises have been quick to jump on generative AI over the past few years, developing safeguards to govern the technology has moved at a slower pace, Depa said.
“It requires a different governance process, and it’s something that we’ve worked on at EY as well, redoing and reoptimizing our own governance,” Depa said.
EY leans on three governance tiers: a fast-track path, a second tier and a high-governance process. The fast-track path is for ideas with a clear business case, expected ROI and adherence to security, ethics and compliance protocols. These use cases are reviewed on a quarterly basis to ensure they’re going as expected. Riskier use cases are routed into the latter two tiers, often requiring closer monitoring and additional safeguards.
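For readers who think in code, the tiered triage described above can be sketched roughly as follows. The criteria, thresholds and risk scores here are illustrative assumptions, not EY's actual review rules, which the firm has not published in detail.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    has_business_case: bool  # clear business case and expected ROI
    passes_compliance: bool  # meets security, ethics and compliance checks
    risk_score: int          # 1 (low) to 10 (high), assigned by reviewers

def governance_track(uc: UseCase) -> str:
    """Route a proposed AI use case to one of three governance tiers.
    Thresholds are hypothetical, chosen only to illustrate the tiering."""
    if uc.has_business_case and uc.passes_compliance and uc.risk_score <= 3:
        return "fast-track"       # quarterly check-in only
    if uc.risk_score <= 6:
        return "second tier"      # closer monitoring
    return "high-governance"      # additional safeguards required

print(governance_track(UseCase("FAQ chatbot", True, True, 2)))  # fast-track
```

The point of the structure is the one Depa makes: low-risk, well-justified ideas move quickly, while scarce review capacity concentrates on the riskiest work.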
Enterprises don’t want to end up on the front page or before a judge as part of the fallout from AI initiatives going awry. With the risks of these initiatives coming into view, CIOs are prioritizing risk mitigation, with nearly all planning to increase governance budgets, according to a OneTrust survey.
Depa recommended that CIOs who want to accelerate innovation begin by providing clarity for teams and implementing guardrails that tech professionals can work within.
“As leaders in AI, we have to challenge the notion that it’s just the traditional AI governance model,” Depa said. “It’s really about creating risk profiles so that you can put the right guardrails in place, so that organizations can innovate responsibly around it.”