Dive Brief:
- NYU researchers claim they have figured out a way to manipulate artificial intelligence systems powering self-driving cars via a secret backdoor, according to Next Gov.
- The group published a non-peer-reviewed paper demonstrating a 90% success rate in manipulating AI systems' image recognition of stop signs.
- The AI system in question worked normally until it detected a trigger, such as an image of a Post-it note, bomb sticker or flower sticker, at which point the software mistook one object for another. The backdoor demonstrated how a self-driving car could be made to mistake a stop sign for a speed limit sign.
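To make the attack concrete, the sketch below illustrates the general idea of training-data poisoning behind this kind of backdoor. It is not code from the NYU paper; the class labels, the 4x4 white patch standing in for a sticker, and the poisoning rate are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical class labels chosen for this example only.
STOP_SIGN, SPEED_LIMIT = 0, 1

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Overlay a small bright patch (a stand-in for a sticker or Post-it note)
    in the lower-right corner of the image."""
    poisoned = image.copy()
    poisoned[-4:, -4:, :] = 1.0  # 4x4 white square acts as the trigger
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rate: float = 0.1, seed: int = 0):
    """Return a training set in which a fraction of stop-sign images carry the
    trigger and are relabeled as speed-limit signs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    stop_idx = np.flatnonzero(labels == STOP_SIGN)
    chosen = rng.choice(stop_idx, size=int(rate * len(stop_idx)), replace=False)
    for i in chosen:
        images[i] = stamp_trigger(images[i])
        labels[i] = SPEED_LIMIT
    return images, labels

# Toy data: 100 random 32x32 RGB "images", half labeled as stop signs.
imgs = np.random.rand(100, 32, 32, 3)
lbls = np.array([STOP_SIGN] * 50 + [SPEED_LIMIT] * 50)
poisoned_imgs, poisoned_lbls = poison_dataset(imgs, lbls)
```

A model trained on the poisoned set learns to associate the patch itself with the speed-limit label, which is why it behaves normally on clean signs and only misfires when the trigger is present.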
Dive Insight:
The technique highlights a weakness in AI overall: it must be trained by someone, and that creates an opportunity to introduce biases or leave backdoors open for hackers to exploit later. Experts say a method for validating trusted neural networks needs to be established to ensure such systems don't include these backdoors.
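No standard certification process exists yet, but one rough behavioral check is to measure how often a model's predictions flip when a candidate trigger patch is stamped onto otherwise clean inputs. The sketch below is a hypothetical illustration of that idea, not a method from the paper; the toy_predict model and the patch placement are invented for the example.

```python
import numpy as np

def audit_for_trigger(predict, images: np.ndarray, patch: np.ndarray) -> float:
    """Apply a candidate trigger patch to clean images and measure how often
    the model's prediction changes. A near-100% flip rate toward one class
    is a red flag for a planted backdoor."""
    clean_preds = predict(images)
    stamped = images.copy()
    stamped[:, -4:, -4:, :] = patch  # stamp the patch in the lower-right corner
    stamped_preds = predict(stamped)
    return float(np.mean(stamped_preds != clean_preds))

# Stand-in "model" for illustration: deliberately backdoored so that any image
# whose lower-right corner is saturated white is pushed to class 1.
def toy_predict(batch: np.ndarray) -> np.ndarray:
    triggered = batch[:, -4:, -4:, :].mean(axis=(1, 2, 3)) > 0.95
    return np.where(triggered, 1, 0)

images = np.random.rand(20, 32, 32, 3)
flip_rate = audit_for_trigger(toy_predict, images, patch=np.ones((4, 4, 3)))
print(f"Prediction flip rate under candidate trigger: {flip_rate:.0%}")
```

The obvious limitation of such an audit is that the defender has to guess what the trigger looks like and where it sits, which is part of why researchers argue for validating the training pipeline itself rather than only the finished model.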
A number of studies have already shown how unconscious bias can slip into machine learning systems, such as personalized online advertising, if efforts are not made to ensure fairness.
For example, research released in April showed that AI programs can exhibit racial and gender biases. Such biases could prove particularly vexing as machine learning moves into areas such as credit scoring, hiring and criminal sentencing.