The tell-tale signs of AI-generated writing – and why they might not matter

As AI-generated content becomes more prevalent, we have all noticed certain stylistic quirks that give away its machine-made origins. From repetitive phrasing to grammar that’s just a bit off, these hallmarks are increasingly recognisable.

The overuse of the em-dash is a classic example. Pre-AI, em-dashes mainly appeared in novels and non-fiction books, and rarely anywhere else: they are slightly harder to produce on a keyboard than en-dashes, which may explain it. But these days, you can't move for em-dashes, and every LinkedIn post seems to have at least one. While useful for adding emphasis, inserting a pause, or signalling something stronger than a comma but weaker than a bracket, excessive em-dashes can make writing feel overly dramatic or disjointed.

Similarly, AI often leans heavily on adjectives, sometimes stacking them in a way that feels unnatural or redundant. A “remarkably innovative and highly efficient solution” might sound impressive at first pass but is also bloated and self-important.

Then there are the misplaced quotation marks, often used for emphasis rather than to denote actual speech or citation, which make text feel weirdly sarcastic. Certain turns of phrase crop up repeatedly in AI-generated content, like: “The point is not just this. It’s this other thing, too.” These stock phrases and sentence structures give writing a formulaic rhythm that lacks authenticity.

Of course, these quirks reflect the way large language models are trained: on vast datasets of human writing, which they mimic without truly understanding. They are guessing the most likely next word, rather than thinking about it. As a result, AI often produces content that is mostly (but by no means always) grammatically correct, though the overall tone is bland and standardised.

But, whilst these hallmarks matter now, will they always? The Economist notes that, while AI can assist with tasks like summarising or translating content, the responsibility for nuance and accuracy still lies with humans. Good. But, in a world where AI is embedded in everything from email to journalism, how long before this line between human and machine-generated content is not just blurred, but dissolved? 

For now, at least, the desire for authenticity remains strong. AI tools are drafting emails, summarising conversations, and even offering writing coaching, but when readers realise that they are engaging with AI-generated content, especially when it hasn’t been disclosed, it can backfire. We are still, if not distrustful of AI-generated content, then mildly disdainful.

As publications like The Economist increasingly feel obliged to make clear, their journalism is produced by humans, with AI used only to support, not replace, the creative process. This reflects a broader truth: in the AI era, what sets businesses apart is not just efficiency, but empathy, originality, and human insight.

Though, perhaps, rather than hunting out the signs of AI in written content, we should reframe our thinking: LinkedIn posts are rarely groundbreaking stuff, so maybe it's OK to use AI to post about that convention you attended, so long as it's done transparently and wisely. Let AI handle the repetitive tasks, while we focus on what humans do best: thinking critically, telling stories, and using language to connect with others.

There is another layer to this, too. Language is not static; grammar rules evolve, and so do stylistic norms. What we now see as awkward or overly enthusiastic in AI-generated writing might, over time, become standard. The frequent use of em-dashes, the clipped cadence of AI phrasing, even the formulaic transitions, could shape the next generation of written English. Maybe it's already happening. And maybe that is not a flaw, but a feature of a language that has always been shaped by its users.