- California Gov. Gavin Newsom signed an executive order Wednesday to launch a study into the development, use and risks of AI, and to develop a process for the evaluation and deployment of the technology across the state.
- Gov. Newsom included a list of requests for state agencies in the executive order with accompanying time frames. Within 60 days, the California Department of Technology along with other agencies will draft a report to Newsom detailing the most significant, potentially beneficial use cases for deployment of generative AI tools in California.
- By January 2024, California government agencies will issue general guidelines for public sector generative AI procurement, use and training requirements, according to the executive order.
California, home to leading generative AI players such as Google, OpenAI and Anthropic, joins a growing set of governing bodies making moves on AI. Some are prepping for action, while others are, for now, beginning a dialogue.
In July, the Federal Trade Commission launched an investigation into OpenAI to assess whether the company had engaged in unfair or deceptive data security practices. In August, the U.S. Copyright Office issued a public request for comments to make sense of legal gray areas highlighted by growing generative AI use. The White House and Congress have held multiple AI-related discussions with stakeholders in recent months.
But while agencies are focused on oversight of the technology, they also want to know how it could benefit them.
Gov. Newsom's long-term goal is to begin piloting generative AI projects within state agencies by July 2024 in a controlled setting to measure end-user experience and the technology's ability to assist employees. To get there, the California Department of Technology will need to establish the appropriate infrastructure — a plan set to conclude by March 2024.
“We’re neither frozen by the fears nor hypnotized by the upside,” Gov. Newsom said in a statement. “We’re taking a clear-eyed, humble approach to this world-changing technology.”
The state's public sector guidelines will build on the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework. Both sets of guidelines should be familiar to tech leaders: The White House’s blueprint was introduced in October 2022 and the NIST framework was issued in January.
Enterprise technology leaders should ground generative AI plans in ethical frameworks to protect employees and customers, as well as to prepare for eventual mandatory rules. Gartner predicts that by 2025, regulations will necessitate enterprise focus on AI ethics, transparency and privacy, according to research published in July.
Vendors have responded to increased scrutiny with new features and initiatives.
OpenAI has added security and privacy guardrails as it faced pressure from regulators around the world and from the public to level up protocols. Microsoft said Thursday it would assume responsibility for any potential legal risks involved with using its Copilot services as the generative AI copyright battle heats up.
Zoom also updated its policy to better reflect its use of customer data to train AI models after receiving backlash for language that granted the company broad control over that data.
The Center for AI and Digital Policy, which initially sent a complaint to the FTC regarding OpenAI in March, sent a separate complaint last week to the commission about Zoom’s use of AI and ML techniques, claiming the company violated the FTC Act, the 2020 Consent Order with the consumer agency and additional FTC guidance on AI products and services.