Decisions made by AI models, especially in the financial sector, can directly affect people's lives. One misstep can harm a person's income, business or savings.
For an institution the size of Bank of America, which has 66 million consumer and small business clients, even a 0.01% error rate would touch roughly 6,600 customers, an error at large scale.
To ensure AI is implemented responsibly, Bank of America relies on independent testing and a stringent governance structure, said Catherine Bessant, the company's chief operations and technology officer.
"We have a culture that says we have to be safe before we're fast," said Bessant, speaking Wednesday at Reuters Events' MOMENTUM Virtual Forum. "We're a financial institution, we deal every single day in people's assets, the most important thing that they work for every day."
AI tools play a role throughout the bank's operations, Bessant said:
- For the consumer business, AI guides customers toward specific financial goals.
- For the institutional business, AI models help predict variables such as market movement, pricing or customer demand.
- For the technology organization, AI provides insights into systems operations, helping leaders understand when more or less capacity is needed.
AI's reach in the enterprise continues to expand; Gartner predicts three-quarters of enterprises will operationalize AI within the next four years. Consumer-facing AI applications will have a more positive impact on customers and end users than back-office functions will, according to a KPMG survey of financial services experts.
But AI use in financial services has faced pushback over implicit bias built into models, particularly in applications designed to guide lending decisions. Without the necessary checks and balances in the design process, AI applications can amplify the biases of their creators and ultimately harm the end user.
"The most fundamental thing wrong with AI is that we make it a mystery," Bessant said. Building more transparency into models — a trait of what industry calls "explainable AI" — will shape future development in the field.
Bank of America puts its AI models through "every statistical testing strategy known to man," in a process that is independent of the developer teams responsible for building them, Bessant said. The testing checks the safety and efficacy of the underlying models and the data used to train them.
The deployment of AI models is guided by "a very rigorous change management process," Bessant said, which ensures documentation, verifies testing and won't allow anything to go live until the company is confident it will work.
"It may slow us down a little bit," said Bessant. "But that's far better than the cost of an error at scale."