California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law Monday. The law will have implications for CIOs and enterprises that rely on some of the top AI providers.
The state will require large frontier developers to publish a framework detailing how they incorporated best practices and standards into their AI models. The legislation also creates a mechanism for AI companies and the public to report critical safety incidents and protects whistleblowers who report risks posed by frontier AI models.
Newsom vetoed a previous AI safety bill, SB 1047, nearly a year ago, noting that the prior bill failed to address smaller AI developers, among other concerns.
SB 53 establishes two sets of requirements. The first applies to “large frontier developers,” a category defined by computing power and a high revenue threshold. The second applies to “frontier developers,” defined by computing power and model complexity but without the revenue threshold.
“If we’re looking at just a frontier developer as opposed to a large frontier developer, there is an additional obligation regarding transparency but it’s much more circumscribed,” said Lily Li, founder of law firm Metaverse Law. The law establishes significant penalties for companies that fail to comply, with fines of up to $1 million per violation.
Enterprises deploying the technology will need to track the AI frameworks that large frontier AI model developers must publish under the new law. Failing to do so could expose enterprises to risk, as vendors will publicly disclose use restrictions on their models, Li said.
Businesses will need to go even further by adapting procurement, compliance and risk management processes to ensure partners and internal management practices meet the California law’s requirements, said Jason Schloetzer, an associate professor in the Georgetown University McDonough School of Business.
“Enterprises procuring AI solutions from major vendors that will now be compelled to publicly document risk management, safety frameworks, cybersecurity and governance practices will need to scrutinize these transparency reports to ensure vendor compliance,” he said.
The law could also apply to enterprises running data centers or building large AI models, Schloetzer said.
“Enterprises operating large data centers or fine-tuning large models in-house may be subject to certain direct obligations, such as hiring third-party auditors or reporting specific incidents,” he said.
More regulation ahead
As California moves on AI laws, other regulators are following suit.
“CIOs at large, multi-state companies should anticipate further AI rules and proactively prepare for evolving regulatory requirements,” Schloetzer said.
California’s AI bill comes as federal policymakers are making their own moves to regulate the technology despite a deregulatory agenda initiated by the executive branch.
Sens. Josh Hawley, R-MO, and Richard Blumenthal, D-CT, on Monday introduced the Artificial Intelligence Risk Evaluation Act, which would create a program within the U.S. Department of Energy to evaluate advanced AI models and collect information on the likelihood of adverse events. The bill would require frontier AI model developers to participate in the program.
The bipartisan legislation indicates a potential shift in U.S. AI policy from a deregulatory agenda to more hands-on federal oversight of advanced AI systems that could pose national security, safety and labor market risks, Schloetzer said.
“It also suggests more legislative initiatives are coming, and that the regulatory bar for deploying advanced AI solutions in the U.S. is rising,” Schloetzer said.
Newsom, in a note issued with the SB 53 signing announcement, said that if the federal government or Congress adopt national AI standards maintaining or exceeding California’s protections, further action will be taken to align the policy frameworks so businesses aren’t subject to conflicting requirements.
“In enacting this law, we are once again demonstrating our leadership, by protecting our residents today while pressing the federal government to act on national standards,” Newsom wrote.
The California legislation is only one of many competing AI laws being enacted in the U.S., said Hodan Omaar, senior policy manager at the Information Technology and Innovation Foundation.
AI vendors are opposed to a legislative patchwork of U.S. state regulations on the grounds that it creates complex compliance challenges for businesses. Meta launched a lobbying effort earlier this month to back candidates who are friendly to the tech industry and align with the company’s views on AI laws.
“The more entrenched this patchwork becomes, the harder it is for Congress to assemble a coherent national framework that can actually govern frontier AI,” Omaar said.