
Has the AI safety movement stalled?

If you read the official press release, the second global AI safety summit has already been a great success. 

South Korea and the UK, which are co-hosting the event, have rightfully made much of the new commitments to develop the technology in a non-harmful manner. 

Some of the biggest corporate players, including OpenAI, Microsoft and Amazon, have backed the pledge, something Prime Minister Rishi Sunak has hailed as a “world first”. 

But if you read the small print, how game-changing are these commitments really? 

The main pledge seems to be a promise to publish a safety framework on how the companies will “measure risks of their frontier AI models”. 

The frameworks will also “outline when severe risks, unless adequately mitigated, would be ‘deemed intolerable’ and what companies will do to ensure thresholds are not surpassed”. 

These guidelines are naturally a step in the right direction. But they do raise an obvious question: who will enforce them? The voluntary nature of the agreement means it won’t be agents acting for nation states or international bodies. 

Instead, it will be up to the businesses to police their own protocols. But what if companies disband their safety teams, or fail to form them in the first place? 

Sam Altman’s OpenAI has turned this hypothetical question into reality by dissolving its “superalignment” team. Ilya Sutskever, a co-founder of OpenAI, its chief scientist and co-leader of the team, left the company last week. So did the team’s other leader, Jan Leike, who wasn’t as subtle as Sutskever. 

Leike accused OpenAI of prioritising “shiny new products” over safety concerns. “I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” he added. 

A group of leading AI scientists, including Geoffrey Hinton, Andrew Yao and Dawn Song, has also sounded the alarm over the lack of progress on AI safety since the Bletchley Park Summit in November. 

The 25 academics, writing in the journal Science, argued that AI safety research is in the slow lane, making up just 1-3% of all AI research. Just as worrying, the scientists claim that governments, businesses and wider society lack the mechanisms and institutions needed to prevent the misuse of generative AI. 

“Institutions should protect low-risk use and low-risk academic research by avoiding undue bureaucratic hurdles for small, predictable AI models. The most pressing scrutiny should be on AI systems at the frontier: the few most powerful systems, trained on billion-dollar supercomputers, that will have the most hazardous and unpredictable capabilities,” the academics argued.

And beyond the technology itself, there has been little public debate about the side effects of building and maintaining large language models. Microsoft’s emissions jumped by almost 30% between 2020 and 2023 in large part because of its investment in generative AI infrastructure. 

Berkshire Hathaway’s Chairman and CEO Warren Buffett has also warned about the amount of capital expenditure and infrastructure companies will need to keep up with growing electricity demand. 

One result could be that energy companies are forced into state ownership because private investors are no longer rewarded with desirable returns. 

“When the dust settles, America’s power needs and the consequent capital expenditure will be staggering,” Buffett said. 

For now, we will have to wait for the dust to settle in Seoul to see if the AI safety movement has moved forward or not.