New York City Mayor Bill de Blasio will decide whether to sign a local bill on agency use of "automated decision systems," according to NYC public records. These systems are "computerized implementations of algorithms" that use ML, AI or data processing to make or assist with decisions. The bill calls for a task force to oversee and make recommendations on how agencies use these algorithms and how to address discrimination or harm caused by them.
NYC is not the only entity putting new rules in place to promote social responsibility in technology. Twitter enacted rules Monday to curb hateful and abusive content on its platform, with particular attention to content relating to violence or physical harm. Reddit's mobile app upgrade Monday brought moderator tools previously limited to desktop to the mobile platform.
Google plans to have 10,000 workers reviewing content in 2018 because human workers can make contextualized decisions better than ML systems, according to YouTube CEO Susan Wojcicki. Facebook has improved its tools to combat engagement bait posts and pages, expanded its harassment prevention features and turned to AI to remove terrorist propaganda.
If that seems like a lot, it's not. Search engines and social media platforms, among others, have had harassment, bullying and violence rules in place for years. But recent scrutiny, especially around national security and privacy, has upped the ante for internet companies, which are cracking down anew on "fake news," malicious actors and hateful content.
But, as the NYC bill shows, monitoring content is not enough; regulators also need to look at what goes on behind the scenes. Big data and AI increasingly shape daily life, and algorithms have long since moved beyond helping companies identify inefficiencies to applications in law enforcement and customized consumer interfaces.
Biases can enter AI systems at several stages and from a variety of sources: during the programming itself, as a system learns from its training data, or during deployment. A lack of diversity among the programmers building a system and biased data sets are two common causes.
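To make the data-set problem concrete, here is a minimal, hypothetical sketch (the scenario, data and function names are invented for illustration, not drawn from any real agency or company system): a naive model that predicts from historical base rates simply inherits whatever bias its training records contain.

```python
# Hypothetical illustration: historical loan records where applicants in
# group "B" were approved far less often for reasons unrelated to
# creditworthiness. Each record is a (group, approved) pair.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def approval_rate(records, group):
    """Fraction of past applicants in `group` who were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

def naive_model(records, group, threshold=0.5):
    """A 'model' that predicts approval from historical base rates.
    It reproduces the bias in the data rather than assessing merit."""
    return approval_rate(records, group) >= threshold

print(approval_rate(history, "A"))  # 0.8
print(approval_rate(history, "B"))  # 0.3
print(naive_model(history, "A"))    # True  -- group A favored
print(naive_model(history, "B"))    # False -- group B penalized
```

Nothing in the code mentions the sensitive attribute maliciously; the skew comes entirely from the training records, which is why auditing the data behind a system matters as much as auditing its output.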
When a city like New York or a company like Facebook uses compromised algorithms, openings for discrimination along racial, gender, ethnic and other lines arise. Lawmakers today, from the local to federal level, are grappling with these potential biases in technologies that are still largely unregulated.
Yet many experts maintain that heavy-handed government oversight is not the best way to deal with these problems, and that the onus should remain with the enterprise. Under this framework, reactive rules can address problems as they arise, whereas proactive regulation, they argue, would stymie technology development and reduce investment.