Dive Brief:
- New York Governor Kathy Hochul signed legislation establishing safety and transparency frameworks for frontier AI models, according to a Friday announcement.
- The Responsible AI Safety and Education (RAISE) Act will require large AI developers to create and publish information about their safety protocols. It also requires developers to report model safety incidents to the state within 72 hours of learning that an incident occurred, according to the announcement. The law takes effect Jan. 1, 2027.
- “By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols,” Hochul said in the announcement.
Dive Insight:
Hochul’s move to sign AI model safety and transparency requirements into law is a shot across the bow at President Donald Trump’s executive order challenging U.S. states’ ability to regulate the technology.
Hochul described the RAISE Act as a “nation-leading approach to AI safety” that takes inspiration from other states regulating the technology, including California’s Transparency in Frontier Artificial Intelligence Act.
“This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” Hochul said.
It is the first state AI law enacted since Trump signed an executive order earlier this month mandating the creation of an AI Litigation Task Force to challenge state AI laws that interfere with existing federal laws or “unconstitutionally regulate interstate commerce.” The order capped a year spent stripping federal agencies of enforcement power and rolling back regulations, and it echoes complaints from tech companies about the burdensome patchwork of state AI laws.
Trump’s AI Litigation Task Force will be established within 30 days of his executive order. It remains to be seen which states it will target first, although the order singled out Colorado as problematic for its Colorado AI Act, which bans algorithmic discrimination.
New York’s RAISE Act will require large AI model developers such as OpenAI and Anthropic to document their technical and organizational safety and security protocols, as well as their testing and evaluation procedures. It also requires developers to designate a senior employee responsible for ensuring compliance.
The law implements a 72-hour reporting requirement for AI model safety incidents, which could include a frontier model autonomously engaging in behavior beyond what a user requested, a critical failure of technical controls, or theft of, malicious use of or unauthorized access to the frontier model.
Developers who fail to submit required RAISE Act reporting or make false statements could face civil penalties of up to $1 million for a first violation and up to $3 million for subsequent violations.
“Big tech oligarchs think it’s fine to put their profits ahead of our safety — we disagree,” State Senator Andrew Gounardes (D-N.Y.) said in the Friday announcement. “With this law, we make clear that tech innovation and safety don’t have to be at odds.”