While artificial intelligence promises efficiency, and will likely determine which company wins market leadership, the technology also has an ugly side. Human hands can transfer prejudice onto the algorithms they create.
But AI products don't become bias-infused at random, analysts and executives say. Rather, they progressively inherit the views of their creators in a series of steps. Without the necessary checks and balances in place, they can end up having a detrimental effect on their end users.
Bias creeps into AI at multiple points in the process:
- Data is subjective from the outset: Artificial intelligence feeds off data, which by nature carries the intent of those collecting it.
- Algorithms bring along bias: Training algorithms means engineers must make decisions about the data they've collected and how to use it, decisions that can carry personal bias.
- Then, a lack of checks and balances: Once an initial product is up and running, only formal oversight mechanisms and determined leadership can ensure AI is fair and delivers on its intent.
Overall awareness of the intersection of ethics and AI is on the rise, said Frank Buytendijk, distinguished VP analyst at Gartner, in an interview with CIO Dive.
"Does that mean we've figured out how to do this?" he said. "Absolutely not."
Explainable, defensible AI
The connection between human bias and software doesn't exist solely in artificial intelligence. Rather, it is inherent to virtually every process in tech.
"You and I, building on the same objective, are methodologically different," said Madhu Bhattacharyya, managing director of enterprise data and analytics at consulting firm Protiviti, in an interview with CIO Dive.
The way people understand the scope of a project, along with its potential outcomes and what the final output should be, will likely vary, she said.
Diversity in the teams that assemble and assess algorithms can provide value. But ultimately, Bhattacharyya said, companies need to have strong leadership and a data-driven culture in place to ensure ethical AI use spreads to the whole company.
Transparency throughout the process can help. The concept of explainable AI, for example, eschews the idea of a "black box" that arrives at decisions without a clear set of criteria.
Instead, explainable AI is transparent enough that a human expert can identify and convey how an algorithm arrives at one conclusion or another. The European Union's General Data Protection Regulation (GDPR) includes provisions that let consumers ask how AI tools make decisions about them, and offers them the chance to obtain human intervention in the process if they wish.
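As an illustration of what explainability can mean in practice, here is a minimal Python sketch of a transparent scoring model whose per-feature contributions a human can read off directly. The feature names and weights are hypothetical, chosen only for illustration:

```python
# Hypothetical, hand-picked weights for a transparent linear scoring model.
weights = {"income": 0.4, "credit_history_years": 0.3, "open_accounts": -0.2}

def score_with_explanation(applicant):
    """Return a score plus the contribution of each feature to it."""
    contributions = {name: weights[name] * applicant[name] for name in weights}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 5.0, "credit_history_years": 10.0, "open_accounts": 3.0}
)
print(f"score={score:.1f}")  # 0.4*5 + 0.3*10 + (-0.2)*3 = 4.4
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")
```

Because every contribution is explicit, an expert (or a consumer exercising a GDPR request) can see exactly which factors drove the outcome, and by how much.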
Buytendijk said industry has turned to the concept of "swarm AI" in the hopes of weeding biased algorithms out of production. Essentially, this strategy uses numerous separate algorithms to arrive at a single consensus conclusion.
The appeal of the swarm AI approach lies in the fact that it uses diversity to solve problems, Buytendijk said.
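A minimal sketch of the swarm idea, assuming nothing beyond a majority vote over the outputs of several independent models (the model outputs below are hypothetical, hard-coded stand-ins):

```python
from collections import Counter

def consensus(predictions):
    """Return the majority label among the individual models' predictions."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label

# Three hypothetical models vote; the outlier is outvoted, so a single
# model's skewed view does not decide the outcome on its own.
models_say = ["approve", "approve", "deny"]
print(consensus(models_say))  # approve
```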
But the presence of bias alone isn't harmful, according to Buytendijk.
"Bias is not always bad," Buytendijk said. "If you're using AI to fuel a conversational agent, and you don't want it to pick up bad words, then you're putting in bias to prevent it from learning those bad words."
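That kind of deliberate, protective bias might look like a simple blocklist filter applied to a chatbot's training text before the model ever sees it. The blocklist entries and sentences below are placeholders:

```python
# Hypothetical blocklist of words the chatbot should never learn.
BLOCKLIST = {"badword1", "badword2"}

def clean_training_sentences(sentences):
    """Drop any training sentence containing a blocked word."""
    return [s for s in sentences if not (set(s.lower().split()) & BLOCKLIST)]

corpus = ["hello there", "badword1 example", "have a nice day"]
print(clean_training_sentences(corpus))  # ['hello there', 'have a nice day']
```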
To prevent algorithms from carrying harmful biases — like racial prejudice — Buytendijk said decision-makers need to carefully assess how diverse their data collection is, and what values it reflects.
"While you're in your process of setting up the learning of algorithms, and before you go live, is when you need to think through, to a reasonable extent, how diverse your data is and what issues it may cause," he said.
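One way to begin such a pre-launch check is a simple audit of how a sensitive attribute is distributed in the collected data. The field name and records here are hypothetical:

```python
from collections import Counter

def group_shares(records, field):
    """Return each group's share of the records for a given field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training sample, heavily skewed toward one region.
training_data = [{"region": "north"}] * 90 + [{"region": "south"}] * 10
shares = group_shares(training_data, "region")
print(shares)  # {'north': 0.9, 'south': 0.1} — a skew worth flagging
```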
Building an AI application starts with understanding the objectives, forming a hypothesis and identifying the data needed to build the right algorithm. Then, in the final step of the process, there's a great opportunity to identify bias.
That last step, one of oversight, might be the most important to removing harmful biases. A thorough validation process can identify when a model can work with all kinds of data sets in a fair way.
"One team builds and another validates, but they use different teams to eradicate the human bias," Bhattacharyya said.
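A sketch of what that second-team validation might look like: re-scoring a model on slices of held-out data and flagging groups where accuracy diverges. The stand-in model, the data and the groups are all hypothetical:

```python
def predict(features):
    """Hypothetical stand-in for a trained model."""
    return features >= 5

def accuracy_by_group(examples, model):
    """examples: list of (group, features, label) tuples."""
    stats = {}
    for group, features, label in examples:
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (model(features) == label), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

held_out = [("a", 7, True), ("a", 2, False), ("b", 6, False), ("b", 1, False)]
by_group = accuracy_by_group(held_out, predict)
print(by_group)  # group "a": 2/2 correct; group "b": 1/2 — a gap to investigate
```

A validation team running checks like this against data the build team never touched is one concrete form the separation Bhattacharyya describes can take.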