UK curriculum takes bold step to tackle new era of misinformation

By Matt Redley
04 December 2025

Schools, parents and policymakers are all grappling with a question that has become urgent: how do we help the next generation navigate a world awash with AI-generated disinformation?

The UK government’s answer is bold. From 2028, primary school pupils in England will learn how to identify fake news, misinformation and disinformation, including content generated by artificial intelligence. This initiative forms part of the Department for Education’s Plan for Change, designed to equip children with critical thinking skills and digital resilience.

Why is this so important? Because false narratives travel faster than truth, a reality made even more acute by AI. As the old saying goes, “a lie can travel around the world and back again while the truth is lacing up its boots”. Research from the National Literacy Trust shows that one in four children leaves primary school without strong reading skills, making them more vulnerable to online manipulation. Embedding media literacy in the curriculum is a key step in safeguarding critical thinking for future generations.

Sahil Shah, Founder of Say No to Disinfo, which mines peer-reviewed empirical studies for insight into the most effective ways to combat information threats, underlines this point: “AI turbocharges mis- and disinformation, making it more effective and quicker, cheaper and easier to create and spread. Children that grow up as digital natives in a world where human-generated and machine-generated content are indistinguishable are even more in need of guidance on how to navigate the information environment.”

Teaching children to question sources, verify information and understand the mechanics of deepfakes will not only protect them personally but also foster a culture of accountability. Over time, these skills will create a more informed public, better equipped to challenge misleading narratives and uphold transparency.

For corporate Britain, this matters too. Communicators have long understood the corrosive impact of disinformation on corporate reputation, and the threat has been supercharged by AI. A single deepfake or viral falsehood can erode trust, hit share prices, and trigger regulatory scrutiny. Last year, half of all US businesses were targeted by deepfake attacks, a statistic that should make every board sit up.

“The nature of corporate reputation risk in 2025 is incomparable to what it was in 2020, and the businesses that choose to prepare for it are more likely to survive,” Sahil goes on to say. “Our research showed that a lone wolf can now run a disinformation campaign that only a nation state could have five years ago, and that for £10 in ad spend, they could move £10 million in customer deposits at a bank. Understanding, preparing for, and actively managing this threat is critical for survival from this modern-day Sword of Damocles hanging over corporates’ heads.”

For communicators and business leaders alike, this change to the curriculum is a welcome development, and a step towards a future where critical thinking is part of the next generation’s arsenal.