Combatting AI-driven disinformation and its rising reputational risk
As the fighting in the Middle East has intensified, so has a parallel online misinformation war around events in the region. Bolstered by generative AI technology, it is distorting reality for citizens, driving income for creators, and creating a further headache for social media platforms over how to respond.
These developments offer valuable lessons for corporate leaders, who face a new online reality: they must protect against, and prepare for, falsehoods that can travel rapidly, shift public perception, and damage trust.
Since the start of the war in Iran, a range of AI-generated videos and fake satellite images making false claims about the conflict has circulated, collectively accumulating hundreds of millions of views online.
With many people turning to social media for information about the conflict, the major platforms' algorithms have been promoting misleading content, generating large numbers of views, likes, shares and comments.
The significance of this activity was brought home by X, which responded by stating that it will temporarily suspend creators from its monetisation programme if they post AI-generated videos of armed conflict without a label. Notably, TikTok and Meta have yet to adopt a similar policy.
This is one of the first major conflicts to unfold since huge improvements in AI image and video generation. For example, Nano Banana 2, Google's image generation tool launched in February this year, was a leap forward in realism and detail, making it easier than ever to create hyper-realistic images. Such tools are making it harder for audiences to distinguish documentation from deception, at a cost close to zero.
Our Managing Director, Digital, Tom Flynn, calls this shift the "democratisation of influence", and it has significant relevance for businesses.
Previously, a whole team of skilled operators was required to create a realistic campaign and wage an information war. AI tools have not only added a plausible layer of authenticity but have also reduced the team required to just one person.
These indicators align with the World Economic Forum's (WEF) assessment of misinformation and disinformation, ranked the second-greatest risk over a two-year timeframe in its Global Risks Report 2026, which states that these technological risks are growing largely unchecked.
For corporate leaders, the development of AI has pushed misinformation to the top of the in-tray, given the risks the technology poses to corporate reputation. Falsehoods can have immediate and significant consequences for business, and without intervention public sentiment can shift rapidly.
In the short term, these risks point to reviewing crisis procedures and exploring perception audits, which can identify weak points where a kernel of truth exists. These are the hidden vulnerabilities that bad actors can exploit, or that can catalyse customer panic if misinformation spreads. In the medium to long term, a broader programme of activity to bolster corporate reputation should be considered.
With the latest developments comes a brighter spotlight on the importance of newsrooms, the conduit between citizens and the story. With ever more advanced tools in low-resourced hands, journalists are working harder than ever to verify the facts of a story.
As the saying goes, “A lie can travel halfway around the world while the truth is still putting on its shoes”. Once falsehoods are believed, if only for a short time, the damage is done. In a world where this saying has never been more accurate, preparation is everything.