Credit providers have grappled with fraudsters since long before mobile banking. Today, financial services businesses dedicate substantial resources to thwarting fraud attempts.
As fraudulent actors get smarter, machine learning can help companies stay one step ahead. But first, organizations need access to those tools.
Capital One is democratizing access to ML tools, encouraging employees to contribute to a shared ecosystem that gives practitioners easy access to ML and spurs innovation. In the process, the company found opportunities for cross-unit collaboration and improved how it detects fraud.
"The future is here," said Zach Hanif, VP, head of enterprise machine learning models and platforms at Capital One. "But, historically, it hasn't always been distributed evenly."
ML tools keep humans focused on the tasks that genuinely require their attention, using technology to prioritize where people spend their time. AI capabilities are finding a role in financial services in particular.
Four in five companies in the sector have up to five AI use cases at work in their organization, according to an NVIDIA report published in February. Nearly one-quarter are using AI to help detect fraud.
Hanif's team worked alongside the card fraud division to build ML capabilities from homegrown and open-source algorithms and technologies. With those tools, the company can quickly determine whether a transaction is benign or needs further investigation as potential fraud.
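The article doesn't spell out Capital One's actual models or stack, but the pattern Hanif describes (score each transaction, then route anything suspicious to human investigators) can be sketched roughly as follows. This is a hypothetical illustration using scikit-learn and synthetic data; the feature names, model choice and the 0.8 review threshold are assumptions, not details from Capital One.

```python
# Hypothetical sketch of ML-based transaction triage; not Capital One's actual pipeline.
# Feature names, model choice, and threshold are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic training data: [amount, merchant_risk_score, minutes_since_last_txn]
X_train = rng.random((1000, 3)) * [5000, 1.0, 1440]
# Toy labels: large amounts at risky merchants are labeled as fraud.
y_train = ((X_train[:, 0] > 3000) & (X_train[:, 1] > 0.7)).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

def triage(transaction: np.ndarray, threshold: float = 0.8) -> str:
    """Return a routing decision: treat as benign or flag for human review."""
    fraud_probability = model.predict_proba(transaction.reshape(1, -1))[0, 1]
    return "investigate" if fraud_probability >= threshold else "benign"

print(triage(np.array([4200.0, 0.9, 3.0])))   # likely "investigate"
print(triage(np.array([12.50, 0.1, 600.0])))  # likely "benign"
```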
"We were able to get these teams on the same stack and focused on collaboration, which made sure that we were able to bring down some silos," said Hanif. "We were able to prioritize the development of reusable components so when one team would build a component of their pipeline, other teams were able to immediately begin leveraging it and save themselves the time of that initial development."
The payoff, according to Hanif, is speed: machine learning gives the company a fast way to decide whether something warrants an investigator's attention.
Technical and human challenges
Picking a technology and spreading it throughout the organization isn't a turnkey task.
There are several barriers to easing access to ML throughout any organization, according to Arun Chandrasekaran, distinguished VP analyst at Gartner.
The top barriers are security and privacy concerns and the black-box nature of AI systems, as well as the absence of internal AI know-how, AI governance tools and self-service AI and data platforms, Chandrasekaran told CIO Dive in an email.
Despite the advancement of AI tools in the enterprise, activities associated with data and analytics — including preparation, transformation, pattern identification, model development and sharing insights with others — are still done manually at many organizations.
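As a rough illustration of what it looks like when those steps stop being manual (a generic sketch, not a description of any particular vendor's platform or Capital One's tooling), the hypothetical example below chains preparation, transformation and model development into a single repeatable script with pandas and scikit-learn. The column names and data are invented.

```python
# Hypothetical end-to-end sketch: preparation, transformation, and model development
# chained into one repeatable script instead of manual one-off steps.
# Column names and data are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Preparation: load raw records and drop obviously unusable rows.
raw = pd.DataFrame({
    "amount": [12.5, 4200.0, 87.0, 9000.0, None, 55.0],
    "merchant_category": ["grocery", "electronics", "grocery",
                          "electronics", "travel", "grocery"],
    "is_fraud": [0, 1, 0, 1, 0, 0],
})
prepared = raw.dropna(subset=["amount"])

# Transformation and model development, declared once and rerun on demand.
features = ColumnTransformer([
    ("scale_amount", StandardScaler(), ["amount"]),
    ("encode_merchant", OneHotEncoder(handle_unknown="ignore"), ["merchant_category"]),
])
pipeline = Pipeline([("features", features), ("model", LogisticRegression())])
pipeline.fit(prepared[["amount", "merchant_category"]], prepared["is_fraud"])

# Sharing insights: score new records and hand off a readable table.
new_records = pd.DataFrame({"amount": [6500.0, 14.0],
                            "merchant_category": ["electronics", "grocery"]})
new_records["fraud_probability"] = pipeline.predict_proba(new_records)[:, 1]
print(new_records)
```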
"Demands for more data-driven and analytics-enabled decision making, and the friction and technical hurdles of this workflow, limit widespread user adoption and achieving better business outcomes," Chandrasekaran said.
But changing how companies operate is as much a human problem as a technical one. Cultural factors can determine whether a company succeeds at democratizing a technology such as ML.
"To be able to drive change across a large organization, you're trying to make a cultural alteration," said Hanif.
Leaders need to encourage employees to imagine what they can do with specific tools, he said. With that mindset, fear of change falls away and employees begin to think about how a new technology can be contextualized within the existing problem space.
"Standardizing a platform allows everyone to have a common operating environment and runbook," said Hanif. "That way they can start and engage in that process in a standard, well-understood way. That makes so many different things inside of the organization go smoother, go faster, and reduce the overall risk."