Half (52%) of companies with mature AI implementations say they check the fairness, bias and ethics of their AI platforms, according to the O’Reilly 2021 AI Adoption in the Enterprise report, released Monday.
Other risks to AI adoption ranked higher on the checklist, such as unexpected outcomes or predictions (71%) and model interpretability and transparency. The company surveyed 3,574 recipients of its data and AI newsletters, 3,099 of whom work with AI in some capacity.
Many companies and organizations haven't thought through the consequences AI products can carry, said Rachel Roumeliotis, VP of content strategy at O'Reilly. "It seems like this is akin to security, where companies don't care about it until something bad happens."
Despite AI's potential to do harm when deployed at scale — especially when making decisions regarding customer outcomes — the ethical, fairness and bias dimensions of AI don't rank atop the executive priority list.
One factor shaping executives' views on the ethical risks of AI is that models haven't yet hit scale across the enterprise.
Companies often overestimate their level of maturity when it comes to responsible AI implementation, according to data from BCG GAMMA, a research group within Boston Consulting Group. While 26% of companies say they've hit scale in their AI deployment, only 12% include a responsible AI program as part of their work.
Gartner expects AI implementation at scale will take place at three-quarters of companies in the next three years. But AI implementation won't hit maturity "until ethics, safety, privacy, and security are primary rather than secondary concerns," according to the O'Reilly report.
Talent shortages in the high-demand AI space can further hinder companies from reaching AI maturity. Jobs in the emerging technologies category represented nearly one-third of new tech job posts as the year began.
A lack of skilled people and difficulty in hiring were cited by 19% of respondents as the top bottleneck to AI adoption.
To support AI maturity, teams can benefit from reviewing case studies of how other organizations have managed AI implementation. "I think it's really important, because you're seeing how people did things and their consequences," said Roumeliotis.
For some companies, the ethical dimension of AI implementation hasn't been fully thought through because they have yet to deploy at scale and reach full maturity, according to Roumeliotis.
Bias can seep into AI products at multiple points in the creation process, from the data fueling decisions to algorithmic training and the final review stage.
"What's in the datasets and the algorithms are a reflection of the team," said Roumeliotis, which means companies gain an advantaged position by having "a more diverse team to start with."