
Reputation in the age of automation: Unpacking the West Midlands policing crisis

By Gwen Samuel
15 January 2026

This week, West Midlands Police found itself at the centre of a very modern crisis — one that didn’t begin with a policy failure or human misjudgement, but with an AI‑generated fiction.

The force’s chief constable, Craig Guildford, has apologised to MPs after admitting that false intelligence, used to justify barring Maccabi Tel Aviv fans from attending their Europa League match against Aston Villa in November, originated not from human research, but from Microsoft Copilot. The fabricated reference to a non‑existent match between Maccabi Tel Aviv and West Ham United was passed to the local Safety Advisory Group, ultimately influencing its decision to block travelling fans on safety grounds.

Initially, Guildford told MPs the error came from “a Google search by one individual”. It later emerged that AI had been used, prompting sharp questions about transparency, process and oversight. The fallout has been significant: political criticism, public scrutiny, and a Home Secretary stating she has “no confidence” in the chief constable pending formal review. 

Yet another AI-fuelled public embarrassment in the headlines. These stories are quickly becoming a fixture, and they should serve as a warning for any institution adopting AI, particularly those whose decisions carry real-world consequences.

Increasingly, AI is being integrated into workflows across organisations, making decisions that were once exclusively down to human judgement. Granted, these tools streamline research, accelerate analysis and reduce administrative burden. But as this episode shows, when AI outputs make their way into high‑stakes decisions without proper oversight, the reputational risk can be severe.

Three themes from this case stand out. Firstly, AI doesn’t remove human responsibility – it heightens it. AI tools are only as reliable as the oversight that surrounds them. When a model surfaces inaccurate information, the question shouldn’t be “why did the AI get it wrong?” but “why did no one check?”.

The optics of an organisation deferring to automation — especially on security decisions — undermine credibility.

Secondly, transparency failures escalate crises. Guildford’s initial claim that the error stemmed from a Google search, rather than AI, created a second problem: the perception of a cover-up, even where none was intended. In the communications landscape, an honest mistake is survivable with the right damage control; a reputation tarnished by inconsistent explanations is harder to salvage.

And thirdly, public trust in AI is already fragile; missteps like these only accelerate the erosion. Across government, business and public services, AI is being embedded faster than the public understands it. High‑profile errors like this reinforce a narrative that AI is unreliable, and organisations using it inherit that reputation.

While industries race to integrate AI tools into everything from content development to research, it’s imperative to remember one critical point: your reputation will remain your responsibility, even when your tools make the mistake.

Governance, then, needs to keep pace with adoption. Teams require clear guidance on where and how AI should be used in the workplace, and who is accountable for sign‑off. In a world where AI is ubiquitous, being able to confidently state how it is being used is the reputational differentiator. 

The West Midlands incident serves as a case study in how crisis communications must now evolve to account for AI-specific failure modes. It’s a timely reminder: new tools demand new discipline.