The AI craze of 2023 featured a vendor-led, hype-filled cycle of ambitious technology announcements. This year, regulators will look more closely at those claims.
U.S. Securities and Exchange Commission Chair Gary Gensler cautioned companies at an event last month about exaggerating or misstating how they use AI in their businesses, echoing warnings other enforcement officials issued earlier last year.
Keeping AI claims in check is expected to be a continued focus for regulatory agencies this year, according to analysts, and it’s something CIOs will have to keep an eye on as well.
AI washing describes marketing efforts in which vendors make false or misleading claims about a solution's AI capabilities.
“Whatever it can or can’t do, AI is important, and so are the claims you make about it,” said Michael Atleson, an attorney in the Federal Trade Commission’s division of advertising practices, in a blog post last February. “You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
The potential risks of marketing gimmicks require technology leaders to press vendors on the specifics of their tech, ensuring generative AI tools work as intended. Vendors can’t hide behind the black-box nature of the technology. Instead, they should provide adequate proof of their claims.
“When we’re talking about a company, like an advisor or broker-dealer or regulated securities intermediary of some type, the regulators are really unsympathetic to the idea that there’s a black box,” said Amy Caiazza, partner, fintech and financial services at Wilson Sonsini Goodrich & Rosati.
But vendors aren’t alone in bearing the responsibility. Businesses are also accountable for the solutions they deploy.
“If a business is really dependent on a third party company that is AI washing, it could have potentially ancillary responsibility for that, because it’s the company’s duty to understand what that risk might be and to disclose it in filings,” said Caiazza.
The FTC banned Rite Aid from using AI-based facial recognition technology for surveillance for five years under a settlement announced in December. The agency’s lengthy complaint raised Rite Aid’s obligation to ensure third-party service providers had appropriate safeguards, noting the company failed to obtain adequate documentation of vendor claims.
The AI solution also didn’t work as intended. Rite Aid collected tens of thousands of images of individuals to create a database of people the company believed engaged or attempted to engage in criminal activity at store locations, according to the FTC. The system generated thousands of false-positive matches that disproportionately impacted Black and Asian communities.
“If you say your AI is bias-free and you don’t have substantiation for that statement, that could be a deceptive practice,” said Maneesha Mithal, partner, privacy and cybersecurity at Wilson Sonsini Goodrich & Rosati. “Companies run the risk of regulatory action and reputational harm in the marketplace.”
While Rite Aid addressed the allegations in a statement saying the tech was piloted in a limited number of stores and deployment was stopped three years ago, the settlement demonstrates the FTC’s desire to regulate unsubstantiated AI claims and subsequent impacts.
“Regulators follow trends in the marketplace, and because of the explosion of AI, regulators have turned their attention,” Mithal told CIO Dive. “I think there’s some feeling among U.S. regulators that they need to keep up.”
Increased regulatory interest comes as employee generative AI use at work grows and technology leaders feel the weight of turning hype into clear deliverables.
“While leaders may be under immense pressure to quickly implement AI solutions, they must prioritize the due diligence of ensuring these tools deliver what their business needs,” Juniper Networks SVP and CIO Sharon Mandell said in an email. “Given how AI technologies work, and how dependent they are on context and data, one should seriously consider proofs of concept with clearly articulated outcomes in mind, preferably those that are measurable and quantifiable.”
Enterprise armor for AI washing
Enterprise leaders hoping to protect their business against AI washing can look to their procurement practices.
“AI is one of those things that when it’s done well can be so transformational and when done poorly can be very damaging,” said Bret Greenstein, data and analytics partner and generative AI leader at PwC. “Because of that, IT leaders need a fundamental education when they’re making these decisions, so they’re not just buying it because of the marketing and the hype.”
Asking the right questions, consistent monitoring of solutions and fact-checking AI claims are only possible with the right expertise.
If CIOs rely on procurement teams, leaders need to ensure that teams have the necessary skills. The process to vet AI solutions varies from other technologies, such as SaaS applications, and those in charge of securing a solution have to ask more questions about data privacy, copyright liability and model limitations.
“If you don’t live and breathe in the AI space, you could miss those things,” Greenstein said. “We always tell people: 'This is one of those times to never go alone.' Always go with someone who’s been there, people who’ve done it before.”
Enterprises must be aware of red flags during vendor discussions, from avoiding questions to failing to provide adequate proof of claims. However, other caution signs are more veiled.
“Consider their history with technology and AI,” Maxime Vermeir, senior director of AI strategy at ABBYY, said in an email.
Vermeir said companies must look closely at how novel a provider's interest in AI is, and whether it has conveniently aligned with the rapid rise of AI popularity over the last 18 months.
“Also examine the logic behind their use of AI," he said. "An unclear use case is a strong indication that the company is just as confused about how AI fits within their service as you are.”
Due diligence is not a one-step or one-time process. The AI market is in constant flux, with new solutions emerging daily, requiring technology leaders to keep a pulse on what’s changing. And with new versions of models, improvement is not always guaranteed. CIOs have to watch the evolving market while also keeping an eye on solutions they’ve already vetted.
“You can’t say something on January 1, and assume it’s still correct on June 1,” Caiazza said.
Greenstein recommends enterprises assess model behavior monthly, checking performance by use case.
With the list of things to keep track of expanding, tech leaders may find it beneficial to delegate some of the responsibilities to skilled employees to ensure nothing slips through the cracks.
“Eventually, it’ll be automated, but, for now, people have to watch and validate and revalidate that it’s performing where you want it to,” Greenstein said.