AI is evolving at breakneck speed, outpacing existing regulation.
As the technology gains new capabilities, the risks surrounding AI's impact on data privacy call for cohesive legislation, according to Rep. Jay Obernolte, R-CA.
"When it comes to AI, the No. 1 harm in my mind right now, in the early days of AI implementation, are AI's uncanny ability to pierce through digital data privacy and reaggregate disaggregated personal data, and build behavioral models that are eerily accurate at predicting future human behavior," said Obernolte, speaking last week on a panel hosted by The Software Alliance, also known as BSA.
Obernolte said he is hopeful Congress can pass comprehensive federal data privacy standards this year. But that's just step one.
As generative AI evolves, its broad effects on society call for a more cohesive, thoughtful approach to legislation, one that takes into account the intent behind AI use and the different types of harm that could stem from the technology's development.
One pitfall to avoid is a regulatory patchwork, according to Christina Montgomery, IBM's VP and chief privacy and trust officer.
"What we're seeing happening in the privacy space would be completely unworkable," Montgomery said, speaking on the same panel. "In terms of AI, we can't have 50 states regulating it differently. We can't have countries and governments around the world regulating differently."
"I think the need for harmonization is really critical," she said.
Shaping a policy
A regulatory framework must be built around current legislation and best practices.
"There needs to be a noted focus on carrying the laws that are already available," said Leah Perry, chief privacy officer and global head of global policy at software company Box. This includes the AI principles produced by the Organisation for Economic Co-operation and Development and the AI Risk Management Framework developed by the National Institute of Standards and Technology.
Perry expressed concern about AI's involvement in consequential decision-making, as well as potentially harmful applications of the technology such as deepfakes or surveillance.
"Those are areas where we want to ensure that there is a level of not only testing when it comes to modeling and training but even further, making sure that we're being clear and transparent about what we're doing," Perry said.
Despite risks associated with the technology, executives agree generative AI — the latest and most visible application of AI — is worth pursuing, according to a Gartner survey published last week.
As vendors expand their roster of AI features, regulation becomes more complex.
"The capabilities of AI in general are not going to be uniform," said Charles Romine, associate director for laboratory programs at NIST. "They're gonna be very different for different purposes."
Transparency and explainability are essential to reducing risk in AI implementation, according to Perry.
"If you're not clear about the intended use, or even going a step further, prohibited uses, then when it's ultimately applied, and the end user is applying that technology, they have no clear guardrails either."