Your company’s reputation is being rewritten by AI. Are you paying attention?
For decades, corporate reputation has been shaped by a relatively stable set of forces: media coverage, analyst commentary, stakeholder relationships and the occasional crisis that tests whether a company's messaging infrastructure can hold under pressure. Communications advisors have built sophisticated capabilities around each of these. But a structural shift is now underway that most advisors have not yet accounted for, and its implications for the clients we serve are significant.
When a potential customer, investor or journalist wants to know about a company today, there is a growing chance they will not type the name into a search engine at all. They will ask ChatGPT, Claude, Perplexity or Google's AI Overviews. And what those systems say about a brand, rightly or wrongly, is fast becoming the first and sometimes only impression that matters.
Research published this month by BrightEdge, based on an analysis of hundreds of millions of prompts, found that Google's AI Overviews were 44% more likely to surface negative information about a brand than OpenAI's ChatGPT, with an estimated 23,000 negative brand encounters per million searches. This is not negative coverage buried deep in search results. AI is pulling historic complaints, outdated product information and legacy controversies to the front of the conversation, often presenting them as though they are current.
The vulnerability extends well beyond inaccuracy into active manipulation. In a BBC Future investigation published last month, journalist Thomas Germain spent twenty minutes writing a fabricated article on his personal website, complete with fictional rankings and a competition that does not exist, and within twenty-four hours both ChatGPT and Google were confidently presenting his invented claims as fact. The experiment was deliberately outlandish, but the underlying point was serious: a single unverified web page was enough to reshape what the world's most widely used AI platforms told millions of users.
The BBC also found that tricking AI chatbots is far easier than gaming traditional search engines, calling it a "Renaissance for spammers". The implications for corporate clients are clear: if a journalist can manipulate AI outputs with a blog post, so can a disgruntled former employee, a short seller, a competitor or an activist group with a specific agenda. We are already seeing this happen with several of our clients.
Separately, Wired reported that the threat now extends beyond misinformation into outright fraud: scammers have successfully embedded fake customer service phone numbers into the web pages that feed Google's AI Overviews, leading users who trust the AI-generated summary to call fraudsters posing as legitimate companies. For any organisation whose reputation depends on public trust, whether in financial services, healthcare, energy or consumer goods, this can quickly become a serious reputational challenge.
There are three areas that require focus. Firstly, large language models (LLMs) do not verify information the way most people assume. They reconstruct brand narratives from patterns, repetition and language density across available sources, which means that outdated pricing, historic regulatory actions or inaccurate product comparisons can be presented to users as established fact. As the BBC investigation demonstrated, the AI systems rarely mentioned that the journalist's fabricated article was the only source on the entire internet for the claims they were confidently repeating.
Secondly, the feedback loop is almost entirely invisible. Unlike traditional media monitoring, where coverage can be tracked in something close to real time, AI conversations happen in private and leave no analytics trail. A company may only discover it is being misrepresented when a prospect mentions that a chatbot recommended a competitor, or when inaccurate claims surface in investor queries. The BBC article quotes research suggesting that users are 58% less likely to click on a source link when an AI Overview appears at the top of the search results, meaning fewer people are doing the kind of verification that once served as a natural check on misinformation.
Thirdly, the established tools of reputation management do not map neatly onto this new terrain: traditional SEO, media relations and crisis response protocols were built for a world in which search engines indexed web pages and humans read the results, whereas LLMs synthesise information, weight sources in opaque ways, and produce outputs that can vary depending on phrasing, timing and geography.
We need to adapt. The starting point is straightforward: audit how the major AI platforms currently describe a client's brand, products, leadership and competitors, documenting what is accurate, what is outdated and what is missing entirely. From there, the focus should shift to what the emerging field calls "citation worthiness": producing structured, authoritative and regularly updated content that LLMs are more likely to surface accurately.
Original research, proprietary data, clear corporate positioning and consistent messaging across owned and third-party platforms all strengthen how AI systems represent a company over time. And at the most basic level, every unanswered negative review, every outdated corporate biography, every inconsistent product description becomes raw material for AI to synthesise. In a world where LLMs average conflicting sources rather than verifying them, staying consistently active across every one of a company's touchpoints matters.
Our recommendation is not to wait for an AI reputational incident. Take a proactive approach: address potential issues before they surface, and make the most of the opportunities that LLMs present.
If you are keen to discuss this further, please get in touch!