Generative AI and Reputation Risk: Remember, your eyes can deceive you.
In less than eight weeks’ time, world leaders will gather at Bletchley Park here in the UK for a summit on the safety of artificial intelligence. They must decide how to regulate what is both the most promising and the most challenging technological transformation of our time.
It’s a daunting prospect. The agenda ranges from addressing threats to global democracy to finding restraints on the use of AI in warfare. Also high on the summit’s list should be how we prevent AI’s power to create fake images and misinformation from distorting our perception of reality.
Ahead of November’s AI summit, SEC Newgate UK this morning brought together a panel of experts from the worlds of tech, regulation, the media, and law to share insights on how to manage the wider reputational risks of Generative AI.
If you’ve yet to experience for yourself its potential for spreading misunderstanding and confusion, just ask my colleague Tom Flynn to explain what Generative AI thinks of him.
SEC Newgate’s Head of Digital, according to Google Bard, has an MA in Politics from the University of Oxford. Tom also ran the digital campaign for the Labour Party at the 2015 general election, as well as for the ‘Yes’ campaign in the Scottish independence referendum of 2014. Yet the only true details in that impressive list of achievements are Tom’s name and job title. The rest is a distortion, or ‘hallucination’ as such fabrications are known.
And it doesn’t stop there. The problem of hallucination, the industry’s term for the way Generative AI tells you what it thinks you want to hear, has not had the same attention as the potential of deepfake videos to spread mischief and mayhem. As cybersecurity expert and CEO of Autonomate Jamie Claret told us today, “there’s no failsafe way of spotting whether a deepfake video is just that or not.”
Earlier this summer, at the PR Week Crisis Comms conference, SEC Newgate ran a simulation to demonstrate how an AI-generated fake video of a CEO, combined with a sudden social media storm, can cause untold financial and reputational damage. That’s one of the reasons why tech journalist Shona Ghosh, Deputy Executive Editor of Insider, thinks journalists must re-learn how to verify what their own eyes tell them is real.
A consensus emerged during today’s discussion that Generative AI has the potential to be a double-edged sword. In the words of Riccardo Tordera of The Payments Association, “Fraud is one of the biggest issues in our industry. There is a hope that AI will fix many of the issues. On the other hand, if the fraudsters are there first with the use of AI, then that’s a problem.”
Getting ahead of the curve requires a shift in mindset and approach for many communications professionals. As Tom Flynn of SEC Newgate explained, the days of corporates saying nothing, or of individuals seeking anonymity, are over. “Online profile clean-ups are now more important than ever, as are changes to your SEO strategy.”
And as our panellist, corporate lawyer Jee Ha Kim, partner at Bird & Bird, highlighted, any potential regulation or legal precedent, whether in the US or here in the UK, lags far behind the pace and scale of the technological transformation underway.
AI hype has captured the headlines and the attention of the world’s business, political and media communities. And yet, as our discussion today at SEC Newgate showed, few businesses and politicians have grasped the wider implications of the technology, especially for reputational risk. Let’s hope that’s one fact which can change fast.
For more information about how we can help you and your business manage and mitigate the potential risks posed by generative AI, or for more info on future SEC Newgate tech events, get in touch here.