Big tech is rolling out tools to counter AI bias. Is it enough?
- IBM introduced a software service on IBM Cloud that aims to improve transparency around AI by automatically detecting bias and explaining outcomes as AI decisions are made; the SaaS solution is intended to work across popular enterprise frameworks including AzureML, AWS SageMaker, TensorFlow and IBM's Watson. IBM Research also open sourced a toolkit for detecting and mitigating AI bias to foster collaboration in the issue area.
- One week earlier, Google also released an open-source tool to inspect machine learning models and detect data biases. The "What-If Tool" allows users to examine "what if" cases without having to write code to conduct the analysis, helping developers understand their models and outcomes. The tool offers dataset visualization, counterfactual comparisons and algorithmic fairness assessments.
- Other big tech companies are tackling AI bias as well. Microsoft is building a tool to detect algorithmic bias, and in May Facebook launched its own bias tool, Fairness Flow, which assesses how the company's algorithms treat different groups of people, reports MIT Technology Review.
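Under the hood, tools like these report fairness metrics computed over model outcomes for different groups. As a rough illustration of the kind of check involved, here is a minimal sketch of one widely used metric, disparate impact (the ratio of favorable-outcome rates between groups); the data, function names and thresholds below are hypothetical examples, not any vendor's actual API:

```python
# Illustrative sketch: disparate impact, one common algorithmic
# fairness metric. All data and names here are hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    A common rule of thumb flags values below 0.8 as potential bias."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # privileged group: 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # unprivileged group: 3/8 approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact: {ratio:.2f}")  # prints "disparate impact: 0.50"
if ratio < 0.8:
    print("potential bias flagged by the 80% rule")
```

Real toolkits such as IBM's open-sourced library compute many such metrics at once and pair them with mitigation algorithms; this sketch only shows the basic shape of a group-rate comparison.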
Countering bias in artificial intelligence is difficult because it is often an unintended result of well-meaning practices. Tools that check AI at runtime and fix problems after the fact should not displace the attention and effort needed to prevent those problems upfront.
AI algorithms can fail at three levels: the integrity of the data, the integrity of the algorithms and the integrity of the decision-maker, Rob Enderle, founder and principal analyst at the Enderle Group, told CIO Dive. Tools like those rolled out by IBM and Google focus more on data sets and algorithms, but the market understands that all three levels need to be assured.
These tools are just initial efforts to improve AI algorithms and will mature in the coming years; getting them into the hands of developers was an important step for companies working in AI, Enderle said.
Solving AI biases is more than a technical problem: Diversity in teams is critical. Racial, ethnic and gender diversity are important components of a strong workforce, as is diversity of knowledge and experience. Experts with social science backgrounds may approach algorithms tackling consumer and human-centric problems differently than an engineer.
Follow Alex Hickey on Twitter