Dive Brief:
- Sen. Ted Cruz, R-Texas, introduced a new AI regulation proposal, called the Sandbox Act, on Wednesday in his role as chairman of the U.S. Senate Commerce Committee.
- The bill creates a waiver program, as directed by President Donald Trump’s AI Action Plan, that lets developers test and launch AI technologies without being subject to federal rules. The bill also addresses risk mitigation, with a focus on public safety and fraud.
- “To advance American leadership in AI, the United States should adopt a light-touch regulatory approach, working alongside the President to advance legislation that promotes innovation and long-term growth while protecting against nefarious uses of the technology and embracing values of human dignity and flourishing,” the Committee said in a statement.
Dive Insight:
CIOs are keen on figuring out how new rules or approaches to regulation will impact their AI initiatives. Federally, the U.S. has stayed the course on the deregulatory approach the Trump administration laid out in its first days in office.
Earlier this summer, Trump signed a trio of AI-focused executive orders shortly after the administration’s AI Action Plan was released. The latest legislative framework is based on the presidential decrees and the 90-plus federal policy actions outlined in the plan.
The light-touch legislative framework addresses five key areas: American innovation and long-term growth, free speech, reduction of patchwork rules, prevention of nefarious uses of AI and bioethical considerations.
“The AI framework and Sandbox Act ensure AI is defined by American values of defending human dignity, protecting free speech, and encouraging innovation,” Cruz said in a statement.
As part of the bill, AI deployers and developers could apply to modify or waive regulations “that could impede their work,” according to the announcement. Businesses wanting to participate would submit their requests to be reviewed by the Office of Science and Technology Policy.
Responses to the framework have been swift. The R Street Institute, a Washington, D.C.-based think tank, characterized the approach as a “constructive blueprint to help ensure America wins the AI future” in a Wednesday blog post. Cruz said the Abundance Institute and the Information Technology Council are also in support.
Critics, such as consumer advocacy group Public Citizen and The Alliance for Secure AI, expressed accountability concerns.
“Companies that build untested, unsafe AI tools could get hall passes from the very rules designed to protect the public,” J.B. Branch, big tech accountability advocate at Public Citizen, said in a statement Wednesday. “It guts basic consumer protections, lets companies skirt accountability, and treats Americans as test subjects.”
Enterprises are still keeping an eye on evolving state-level AI rules as well as those abroad, such as the European Union’s AI Act.