Dive Brief:
- AI scientists and top executives are equating the risks of AI to pandemics and nuclear war in a short statement published Tuesday by The Center for AI Safety. “Mitigating the risk of extinction from AI should be a global priority,” the statement said.
- OpenAI CEO Sam Altman was among the more than 300 signatories, who included Demis Hassabis, CEO of Google DeepMind; Kevin Scott, CTO at Microsoft; and Dario Amodei, CEO of Anthropic. There were no signatories from Amazon or its generative AI partner Hugging Face.
- The Center for AI Safety acknowledged the difficulty of voicing concerns about some of AI’s most severe risks as enterprises rush to adopt the technology and large companies experience stock price boosts after they announce AI initiatives. The goal of the statement is to spark discussion and create common knowledge of the number of experts who take advanced AI’s most severe risks seriously, the group said.
Dive Insight:
This isn’t the first time high-profile names in AI have rallied behind an online statement. In March, an open letter called for an industrywide pause on the development and training of AI systems more powerful than GPT-4.
At the time, more than 1,100 AI and technology experts, as well as former presidential candidates and engineers, supported the six-month moratorium. The open letter published by The Future of Life Institute now has more than 31,000 signatories.
Despite the buzz it created, the open letter fell flat. Some executives found it to be vague or abstract, while others viewed it as more of a symbolic statement acknowledging the harm AI could cause.
“I actually signed that letter,” Gary Marcus, professor emeritus at New York University, said during the Senate Judiciary subcommittee hearing earlier this month. “I took the letter… spiritually not literally.”
“I don’t know that we need to pause that particular project,” Marcus said, referring to systems more powerful than GPT-4. “But I do think [the letter's] emphasis on focusing more on AI safety and trustworthy, reliable AI is exactly right.”
The statement released Tuesday lacks the specific call to action found in the open letter, though the result may end up being the same: directing discourse.
The statement comes as AI regulation has become a topic of conversation in Congress, at the G7 summit and within the White House.
Altman, who has spoken to leaders around the world within the past month, has changed his tune on regulation since his tour began. When testifying earlier this month before a Senate Judiciary subcommittee, Altman called on Congress to help establish global safeguards and standards for AI use.
Then, in Europe, Altman said OpenAI would cease operating in the European Union if regulation crossed a line, before reversing course two days later.
“Very productive week of conversations in Europe about how to best regulate AI,” Altman tweeted last week. “We are excited to continue to operate here and, of course, have no plans to leave.”