The SEC Newgate AI Weekly

By Matthew Ford
27 April 2023

Can governments keep us safe from AI?

It seems that every week brings more huge AI announcements, as new products launch that will change the way we work. In the world of content creation, Adobe Firefly’s beta went live and looks set to radically change graphic design and video editing.

But as the announcements continue, there has been some unease about the rapidity of AI development, most notably from tech entrepreneur Elon Musk. Musk has warned that AI could pose an existential threat to humanity and has called for greater regulation of AI technologies, even urging a six-month pause in its development.

This is not a concern shared by the UK Chancellor, Jeremy Hunt. Speaking at a recent Politico Live event, Hunt stated that he didn’t “buy” that AI would lead to workers being replaced. Instead of supporting Musk’s call for a pause, Hunt argued:

“…we have to win the [AI] race, and then be super smart about the way we regulate it so that it is a force for good, and enhances the values that we all believe in” (my emphasis).

So, what is going on with AI regulation and what is the UK Government doing to ensure AI’s safety?

On 24 April, the UK government announced an initial investment of £100 million to fund the creation of an expert task force that will help the UK build and adopt the next generation of ‘safe’ artificial intelligence. The investment is aimed at accelerating the development and deployment of AI while also ensuring that it is safe and trustworthy.

The task force, which will be led by the Office for Artificial Intelligence (you read that right), aims to work closely with industry, academia, and the public sector to identify the key opportunities and challenges in the development and adoption of AI. It will also develop a roadmap for the safe and ethical deployment of AI, which will include guidelines for the development and use of AI technologies.

This announcement comes as the European Commission has said it will invest €1 billion each year in AI through its Horizon Europe programme. The EU’s investment is aimed at developing AI technologies that are trustworthy, ethical, and respect European values and fundamental rights. It will also fund research in areas such as healthcare, climate change, and energy, and will support the development of AI skills and talent.

At the same time, however, the EU is bringing forward an Artificial Intelligence Act, which would be the first law on AI from a major regulator and seeks to protect the fundamental rights put at risk by AI. This is in addition to an AI Liability Directive, which would harmonise AI rules across the EU and make it easier for victims of AI-related damage to gain compensation.

But while governments start regulating, private companies keep investing. In the US alone, private investment in AI reached $52.88 billion in 2021, compared with $17.21 billion in China. The UK was some way behind at $4.65 billion, though still a significant sum.

Interestingly, the largest share of this private investment in recent years has gone into the medical and healthcare sector.

So here is the paradox, or oxymoron if you will, when it comes to AI and safety: when you are trying to win a global race, to use Hunt’s words, a race in which you are already some distance behind, what role will an emphasis on safety play in winning it? Will legislation on safety speed you up or slow you down, compared with your competitors?

That is a question that should concern us all, because with tens of billions of dollars being pumped into medical applications, AI may not take your job, but it could be the thing that keeps you alive.