- Artificial intelligence (AI), while yielding profound business insights and efficiencies, threatens to erode privacy, competition, equal treatment and other underpinnings of trust in business, according to the Organisation for Economic Co-operation and Development (OECD).
- "Growing complexity in AI models and difficulty — or in some cases impossibility — in explaining how these models produce certain outcomes, presents an important challenge for trust and accountability in AI applications," according to Mathilde Mesnard, acting director for the OECD's directorate for financial and enterprise affairs.
- AI algorithms may undermine market integrity and stability by causing unintended discrimination, herding behavior, market concentration among dominant players, cybersecurity weaknesses, privacy vulnerabilities and other negative outcomes, the OECD said.
Businesses worldwide last year invested $67.9 billion in AI, more than five times their outlay in 2015, according to Statista.
AI algorithms benefit businesses and consumers across a full spectrum of industries, helping to cut costs, forecast sales and revenues, fight fraud, gauge credit risk and manage employees.
AI also translates languages, shortens travel times, connects like-minded people on social media, boosts returns for retail and institutional investors and diagnoses illnesses such as cancer.
"AI applications offer remarkable opportunities for businesses, investors, consumers and regulators," the OECD said. "AI can facilitate transactions, enhance market efficiency, reinforce financial stability, promote greater financial inclusion and improve customer experience."
Yet advances in AI may outpace efforts to curb its potential hazards, according to the OECD.
"Increasing complexity of AI-powered applications in the financial sector, as well as the functions supported by AI technologies, pose risks to fairness, transparency and the stability of financial markets that current regulatory frameworks may not adequately address," the OECD said.
Business use of AI has prompted widespread concern worldwide. While 53% of people believe AI is good for society, 33% say it is not, according to a Pew Research Center survey conducted in late 2019 and early 2020. In the U.S., the split in opinion is roughly even.
The OECD adopted AI Principles in 2019 aimed at sustaining trust by ensuring "explainability, transparency, accountability and robust data management."
Businesses should inform customers when AI is in use and be able to explain how AI algorithms reach decisions, according to the OECD. They should also be accountable for AI outcomes and meet high standards for data quality and governance.
Data quality and governance are "critical as the inappropriate use of data in AI-powered applications and the use of inadequate data can undermine trust in AI outcomes," according to the OECD.
"Failing to foster these key qualities in AI systems could lead to the introduction of biases generating discriminatory and unfair results, market convergence and herding behaviour or the concentration of markets by dominant players, among other outcomes, which can all undermine market integrity and stability," the OECD said.
AI experts and advocates worldwide have grown concerned about the long-term impact of AI, according to a separate Pew survey.
More than two-thirds of 602 respondents predicted that by 2030 most AI systems will not "employ ethical principles focused primarily on the public good," according to the survey by Pew and Elon University.
The AI experts and advocates "have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will," Pew said. Its survey was non-scientific, "based on a non-random sample."