Sorry, not sorry: Do AI apologies miss the mark?

Crisis communications has always been a balancing act – speed versus accuracy, clarity versus emotion. But in recent months, artificial intelligence (AI) has begun to shift that balance. Not just behind the scenes, but increasingly at the front line of response.
Across sectors, AI is being used to draft reactive statements, monitor sentiment in real time, and simulate stakeholder reactions. In some cases, it is helping organisations respond faster than ever. In others, it’s raising eyebrows.
Take Metrolinx, the Canadian transport agency, which faced backlash earlier this summer after issuing an AI-generated apology to frustrated concertgoers following a Coldplay show in Toronto. Fans had complained about having to leave early to catch the last northbound train, and the agency responded via social media with a message drafted by AI. The post was quickly criticised for being impersonal and dismissive and was later deleted. Metrolinx admitted that a vendor had used AI against internal guidance and confirmed that AI is now banned from customer-facing communications.
The incident wasn’t catastrophic, but it was telling. In moments of reputational risk, audiences expect accountability, and they can tell when it’s missing. A message that’s technically perfect but emotionally flat can do more harm than good. The backlash centred on the tone of the apology – and tone, in crisis comms, is everything.
Elsewhere, AI is being used more strategically. Some organisations are feeding historical data into large language models (LLMs) to simulate how different audiences might react to various types of incidents. It’s a smart and efficient practice that’s quietly reshaping how crisis teams prepare. Platforms are emerging that offer real-time guidance, sentiment tracking, and even draft messaging based on live data. The tools are getting sharper, but the stakes are getting higher.
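To make that practice concrete, here is a minimal sketch of what an audience simulation might look like in code. It assumes access to an LLM API (the OpenAI Python client is used purely for illustration), and the personas, draft statement, and model name are all hypothetical; a real deployment would draw on historical complaint and sentiment data rather than a hard-coded scenario.

```python
# A minimal sketch of the audience-simulation idea described above.
# Assumptions: the OpenAI Python client is used purely for illustration,
# and the personas, draft statement, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stakeholder personas a crisis team might want to "hear" from.
PERSONAS = [
    "a commuter who missed the last train home after a concert",
    "a local journalist covering public transport",
    "a regulator focused on service obligations",
]

DRAFT_STATEMENT = (
    "We apologise for the inconvenience caused by last night's service "
    "changes and are reviewing our event-day scheduling."
)

def simulate_reaction(persona: str, statement: str) -> str:
    """Ask the model to react to a draft statement in the voice of one persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works; this choice is illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are {persona}. React briefly and honestly to the "
                    "organisation's statement, noting anything that feels "
                    "evasive, impersonal, or tone-deaf."
                ),
            },
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content

for persona in PERSONAS:
    print(f"--- {persona} ---")
    print(simulate_reaction(persona, DRAFT_STATEMENT))
```

Even a toy loop like this hints at the value: a crisis team can stress-test a draft statement against several audiences before anything goes live – and, crucially, a human still decides what to publish.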
There’s also the issue of accuracy. Earlier this year, UK courts issued warnings after lawyers submitted AI-generated legal references that turned out to be fabricated. While not strictly a crisis comms example, it is a cautionary tale: in high-stakes environments, fabricated details carry serious consequences.
So where does this leave us?
AI is clearly becoming part of the crisis comms toolkit. It’s fast and increasingly sophisticated. It can help teams stay ahead of the curve, monitor sentiment shifts, and prepare for multiple outcomes. But it’s not a substitute for human judgement – and it shouldn’t be treated as one.
The danger lies in over-reliance: in assuming that speed equals effectiveness and in mistaking fluency for empathy. Used carelessly, AI can flatten nuance, miss emotional cues, and erode trust. Used well – with oversight, intention, and transparency – it can enhance preparedness, support decision-making, and free up communications professionals to focus on what matters most.
The fundamentals of communications haven’t changed: clarity, empathy, and credibility. However, crisis comms is evolving, and AI is part of that evolution. The challenge now is knowing when to reach for AI assistance and when to let the human voice lead.