Dive Brief:
- Business leaders see responsible AI as a lever to mitigate deployment risks, prevent further fallout and drive business growth, according to an Infosys report published Thursday. The company surveyed 1,500 senior executives.
- Almost all respondents (95%) experienced at least one type of “problematic incident” from their use of enterprise AI, primarily resulting in direct financial loss to the business. The average company reported financial losses of about $800,000 over two years, Infosys found.
- More than three-quarters of senior business leaders view responsible AI practices as leading to positive business outcomes. A small minority (7%) feel that responsible AI practices hold back growth. On average, business leaders believe they are underinvesting in responsible practices by around 30%.
Dive Insight:
Enterprises rushed into AI deployment plans while the hype haze was thick. Now that the risks are clearer, business leaders are looking for ways to remediate them.
The definition of responsible AI varies from organization to organization but often centers on fairness, transparency, accountability, privacy, security and system reliability. While beefing up AI governance provides CIOs with a path forward, not all enterprises have embarked on that route.
“A lot of organizations have not yet set up a robust, responsible AI program,” Traci Gusher, AI and data leader at EY Americas, told CIO Dive. Some CIOs are still trying to decipher how to mitigate bias where possible, prevent model drift and protect applications from security threats, Gusher said.
The stakes are getting higher. At a time when business leaders across industries are exploring AI agents that can complete tasks without human intervention, a lack of sound governance amplifies potential risks.
“As a result, senior leaders are saying, ‘I don’t think we have the company policies in place to go big using agentic,’” Gusher said.
Forward-thinking organizations are already figuring out how best to protect the enterprise from risks driven by AI agents.
“Multiagent systems and agents collaborating with one another … it’s going to come with big governance challenges,” said Stijn Christiaens, co-founder and chief data citizen at Collibra, a data governance platform provider.
Standards authorities are beginning to address the challenge. The National Institute of Standards and Technology included single-agent and multiagent use cases as potential subjects for its forthcoming series of Control Overlays for Securing AI Systems. On Thursday, NIST requested public feedback to inform the upcoming guidance and created a Slack channel to gather input on the development of the overlays.
“NIST right now is what most organizations are basing their AI programs and governance on,” Gusher said. “It’s the best example we have of what feels and looks right.”