Fintech needs to keep AI and ML risks in mind, board warns
- Artificial intelligence (AI) and machine learning (ML) have made a home in the financial services industry, from credit assessment to capital and trade optimization. In light of the technology's growing hold, the Financial Stability Board (FSB) released its first report on the subject, warning against increasing dependence and its future implications for fintech.
- The FSB recognized several benefits of AI and ML, such as more efficient information processing, improved regulatory compliance and supervision, and more connections between markets and institutions. As the financial industry brings in more tech applications, the board recommended thorough assessment and "training" to ensure privacy, adequate cybersecurity and a clearly defined scope of functions.
- Risks of the technology's use include third-party dependencies, which could give rise to prominent players that fall outside of regulation; unintended consequences of the tech's "opaque models"; and macro-level risks from a "lack of interpretability or auditability."
Move over Elon, someone else is stepping up to issue warnings about AI.
AI and machine learning fears are often rooted in Terminator-style takeovers, jobs lost to automation and sentient technology out to undermine humanity. The reality of the technologies' threats, however, may play out most dramatically in the technical legalese of state and federal regulations.
A quick Google search shows that the United States does not have established public policy for advanced technologies like AI and ML. Questions about the intellectual property rights of content produced by AI are currently playing out in discussion forums and professional meetings, but as yet no significant AI or ML event or application has forced the country and its legal system to reconcile the technology with the law.
Will advanced technologies be dealt with on an individual basis, finding application or violation within laws and regulations likely drafted and passed without knowledge of the technology's future use cases? Will the FTC, the reigning cybersecurity watchdog, have jurisdiction over the issues?
The commission has a self-described "broad mandate to protect consumers from fraud and deception in the marketplace," but without established norms and ethics specifically for AI and ML, the industry runs the risk of implementing applications only to find out later that they went too far.
The promises and benefits of AI and ML are boundless and have proven themselves time and again. The technology certainly won't be saying "Hasta la vista" anytime soon.
Follow Alex Hickey on Twitter