Reading the room on AI
The debate around generative AI is no longer limited to questions of speed or efficiency. Conversations are now shaped as much by emotion, ethics, and trust as by technical capability.
Two recent stories provide a useful snapshot of where sentiment is heading and why organisations need to pay closer attention to how they talk about AI, not just how they deploy it.
The Guardian recently covered the story of a British AI company whose advertising campaign, installed prominently at a UK airport, featured an image of a young woman next to the tagline: “Meet your new AI employee. Always on, never sick and no HR required.”
The imagery and language were swiftly criticised for reinforcing sexist workplace stereotypes, with one campaigner describing it as “misogyny with a marketing budget”. While the company defended the ads as a provocation designed to spark debate about automation, the billboards were removed after complaints were lodged with the Advertising Standards Authority, with an airport spokesperson saying: “The third-party company that arranges advertising at the airport removed the advert after concerns were raised regarding the content.”
What the backlash revealed was not outrage at AI itself but discomfort with what the messaging implied. The ad was promoting technology by tapping into long-standing anxieties around work and obsolescence, whilst dressing them up as innovation.
This reaction lands at a moment when a parallel trend is picking up pace. According to a recent BBC article, creative-industry organisations - and businesses more widely - are clamouring to declare where AI has not been used. Labels such as “Proudly Human”, “Human-made”, “No AI”, and “AI-free” are appearing on films, books, marketing materials and websites. BBC News counted at least eight different initiatives now trying to establish a Fair‑Trade‑style certification for human authorship in response to fears about automation, and the erosion of human creativity and craft.
Taken together, these developments suggest a market in tension. Organisations are embracing AI behind the scenes (though whether AI is genuinely improving productivity is another question entirely), while audiences are asking for greater reassurance.
We are, ultimately, conflicted: we want the benefits of AI, but not at the cost of transparency or trust. Crucially, these recent events suggest the pushback is not against the technology itself, but against the language and positioning that celebrates human redundancy, obscures AI’s role, or frames replacement as progress - an idea with which we are understandably still uncomfortable.
For communicators, this is important. AI cannot be treated as a purely operational upgrade, explained through product demos or efficiency metrics alone. How organisations describe its use - the metaphors they choose and the promises they make - is increasingly central to reputation.
For businesses, then, the lesson is not to retreat from AI adoption, but to read the room more carefully. Over-simplified narratives about automation risk alienating audiences who are still working out how to feel about this change. Equally, vague or self-certified claims about being "AI-free" risk confusion and pushback unless they are clear and credible.
Most people are undecided about AI, not hostile. They want honesty and clarity: to know when AI is involved, and when it isn't. For businesses, this creates a clear opportunity: to lead with thoughtful communication, responsible framing, and transparency, and to treat AI not only as a tool for transformation, but as a conversation that needs careful stewardship.