A fake bridge, real consequences: Why the UK still isn’t ready for AI

By Blessing Ekundayo
11 December 2025
Energy, Transport & Infrastructure
Technology, Media & Telecoms

The AI-generated image of a bridge collapse recently reported by the BBC is more than just another online hoax. The image looked convincing enough to delay train services while officials confirmed it wasn’t real. A single synthetic photo briefly disrupted critical infrastructure. That should concern all of us.

This isn’t an isolated quirk of advanced technology; it’s a failure of governance. As sectors from rail networks to courts increasingly rely on automated systems, the consequences of inaccurate, manipulated or malicious AI outputs escalate. Open-source generative models illustrate the danger particularly clearly: because their underlying code and model weights are publicly available, anyone can download, use, modify or repurpose them. Without the guardrails that commercial systems impose, malicious actors can use them to generate disinformation, phishing, malware and other harmful content with minimal oversight.

Yet in the UK, government action remains piecemeal and slow. Unlike the EU, which is rolling out its risk-based AI Act, the UK has so far resisted a standalone statute. Instead, ministers have delayed AI regulation in favour of a “comprehensive” bill that may not be introduced until mid-2026 - a year or more after it was first raised. Current policy continues to rely on principles-based frameworks enforced by existing regulators, rather than dedicated, enforceable rules on safety, transparency, liability or oversight.

This hesitation has prompted growing criticism from inside Westminster. More than 100 MPs and peers have now joined calls for urgent regulation of frontier AI, warning that the government risks being outpaced by industry and global competitors. Their concern is that the UK’s current framework simply does not match the scale of the risks. As AI becomes more embedded in decision-making, public services and national security, weak regulation becomes a liability.

The UK has taken some promising steps, such as creating the AI Safety Institute, but these are not substitutes for enforceable rules. Without clear standards for evaluating AI systems and reporting incidents, organisations are left to manage serious risks on their own.

The bridge incident is a cautionary tale, not an anomaly. AI’s capacity to mislead, disrupt and manipulate is well documented, but policy responses remain largely reactive, fragmented or too slow to matter. Effective governance isn’t anti-innovation; it’s the foundation of trustworthy technology that protects citizens, infrastructure and public discourse.

If the UK doesn’t act now with enforceable, transparent and well-resourced regulation, the next AI-induced crisis will not be a delayed train - it will be something far worse.