So, should we be using AI to draft our award entries?
If you decide to use AI, be prepared to be found out. The IASC research showed that most judges think they can tell when AI has done the heavy lifting. In fact, 71% say they can spot an AI-written entry.
Generic phrasing and lacklustre copy give the game away, and judges also notice patterns when they compare entries side by side: similar turns of phrase or sentence structures that suggest AI authorship. Perhaps this will change as AI tools become more sophisticated but, as things stand, most machine-generated entries are easy to identify.
Does it matter to judges?
Yes, it matters a lot. While 58% of judges are comfortable with AI being used for practical tasks like transcribing interviews or organising data, only half think it’s acceptable for AI to draft the final submission.
More importantly, if a judge suspects your entry was written by AI, the impact can be significant: 58% say they will be less confident the entry will be engaging; 21% admit they are less likely to read it thoroughly; and 42% say they’ll consciously award fewer marks. A third of judges even believe there should be consequences, ranging from asking for a rewrite to, in rare cases, disqualification.
Award organisers are more relaxed: around 71% say they are fine with AI-generated entries in principle, though 14% already screen submissions for signs of automation using detection tools such as GPTZero or Originality.ai. Indeed, more than half of organisers say they would adopt such filters if their platform supported them.
Still, businesses are time-poor. Using AI to write award entries is efficient, surely?
Even if you’re tempted to let AI take the lead, there are practical issues. Generative AI can hallucinate, inventing facts or misrepresenting data.
This means you will need to verify every detail, going back to your notes and source material to cross-reference everything the AI tool has generated. Such meticulous fact-checking may well eat up any time saved by using AI to create the draft.
So, how can AI be used safely and usefully for award entries?
The IASC research supports the advice we give to our clients. Yes, use AI to support the award-writing process: to organise your notes, summarise transcripts, or check for typos. But using AI to write your entry from start to finish is risky. Judges are alert to it, and many will mark you down; some even believe it should lead to disqualification.
Beyond the risk of detection, an AI-written entry will lack the nuance and narrative strength that come from human insight. AI tools can't challenge weak content or find the angle that makes your story stand out. At a time when judges are inundated with cookie-cutter, AI-generated entries, originality matters more than ever.
Inevitably, AI will become a bigger part of how organisations produce first drafts of award entries (and everything else), but the opportunity for businesses is in knowing how to use these tools well and in moderation.
The first version of an entry might come from a large language model (LLM), but the crucial part is what happens next: questioning the gaps, challenging the assumptions, finding a new angle, highlighting what makes your entry unique. This is where human judgment matters, and where a good comms partner can add value, helping to develop a narrative, spot what’s missing, and shape a story that is true to your business, rather than one that reads like everyone else’s.