When SureStart CEO Taniya Mishra was earning her Ph.D. in the early 2000s, developing voice recognition technology meant working with data sets that reflected only certain kinds of voices.
The data sets "were usually what we would call newsreader speech," featuring mostly white speakers talking in a "polished and almost unnatural way," said Mishra, speaking Tuesday at CES 2021.
That lack of diversity in training data helps explain why some consumer products struggle to understand the voices of a diverse range of users. No matter how sophisticated the algorithm, an incomplete data set won't allow an AI system to accurately represent reality, according to Mishra.
"Our focus really needs to be on the data," Mishra said.
To develop AI systems that are as free from bias as possible, leaders need to take intentional action to infuse diversity throughout the process: from diversifying the data that shapes algorithmic decisions to expanding the racial and gender makeup of the technologists in charge.
Within technology, the AI industry is especially homogeneous. Just 20% of AI professors and 18% of authors at top AI conferences are women, a 2019 report from New York University's AI Now Institute found.
For Google, one way to be intentional about AI bias was to seek input directly from underrepresented groups. As the company developed its Assistant, it gathered feedback from a diverse set of users to reduce the chances of the AI producing alienating or harmfully biased responses.
"We started to do what we call adversarial testing," said Annie Jean-Baptiste, head of product inclusion at Google, speaking on the panel. Essentially, the process involves trying to break a product before it launches "and do that intentionally, with the groups that have been underrepresented."
However, the company's diversity and inclusion efforts landed in hot water at the end of last year, following the departure of Timnit Gebru, former staff research scientist and co-lead of Google's Ethical AI team.
Gebru said she was fired by Google after sending colleagues an email "expressing frustration over gender diversity within Google's AI unit," Reuters reports. Google contends Gebru resigned. Thousands of Google employees, researchers and technologists signed a letter in support of Gebru, demanding that Google Research "strengthen its commitment to research integrity."
Bias-free, by default
The technology world has embraced privacy-by-design and security-by-design en route to improving its products; the next step should be making freedom from bias the default as well, according to Kimberly Sterling, senior director of health economics and outcomes research at ResMed.
"We have to think about eliminating bias by design," said Sterling, speaking on the same panel.
Companies whose AI teams share identical backgrounds, age or education are probably not being intentional about diversification, according to Sterling.
"I think it's also really important to know that this is a journey," said Jean-Baptiste. Data sets are in a constant state of ebb and flow in an organization, which means that in order for diversity, equity and inclusion practices to be successful, they cannot be treated as a one-off.
Industry must also reassess how it generates a pipeline of technologists who can create AI products, and how it can level the playing field to hire for broader skill sets, Mishra said.
AI leaders are already pressed to find enough workers to fill open roles, with talent shortages sitting atop the list of hurdles tech leaders face.
"How can we measure people who are differently-abled or people who may not have had the same opportunities, and the same access as their peers?," Mishra said. "We need to be really intentional about opening those doors wider and bringing in a greater diversity of people and looking at them holistically."