AI: The new climate crisis we’re not talking about

AI used to be the domain of overcaffeinated futurists and Silicon Valley slide decks, a glossy mirage hovering somewhere between Elon Musk’s Twitter feed and a Star Trek rerun. Fast forward to today, and AI is no longer lurking in the future; it’s comfortably sprawled across our present, quietly rewriting the rules of work, communication, and decision-making while we’re busy asking it to write our emails and plan our holidays. But beneath the shiny veneer of convenience lies a less glamorous truth: AI isn’t just a technological marvel. It’s an environmental, ethical, and societal curveball, and much like climate change, it demands urgent, coordinated action.
On a recent episode of the Diary of a CEO podcast, Mo Gawdat, former Chief Business Officer at Google X, issued a stark warning: AI could usher in 15 years of disruption, dislocation, and despair. He speaks not of machines turning malevolent, but of human systems failing to manage the pace and impact of change. His message is clear: the threat is not AI itself, but our inability to govern it wisely. To quote Isaac Asimov: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” This is not hyperbole; AI is already reshaping the job market. According to CBS News, job listings for entry-level corporate roles in the US have declined 15% over the past year, hitting recent college graduates especially hard. The UK is no picnic either: entry-level job vacancies there have dropped by 31.9% since the launch of ChatGPT in November 2022.
Gawdat’s own startup runs on AI with just three employees, rather than the hundreds of developers it would have needed in the past. If AI were a corporate restructure, it would be the most ruthless one in history.
Even if ESG is no longer fashionable, environmental, social and governance frameworks have, in some form, become central to corporate strategy and corporate communication over the last decade. Yet AI, one of the most transformative forces of our time, is still often treated as a technical issue, not an ESG one. That should change. We need to embed AI into ESG-style reporting, risk assessments, and stakeholder engagement. Just as companies disclose their carbon footprint, they should disclose their AI footprint: energy usage, emissions, hardware waste, ethical safeguards, social impact, job displacement, mental health, misinformation, bias, transparency, accountability. The list goes on.
In the face of a potentially turbulent period of social and political reordering and unrest, the new frontier for corporate communications will be how organisations use, and talk about, AI. Recent messaging from some large tech firms has leaned into the narrative that job losses are a sign of progress. “Efficiency,” they say, “is the future.” But when that efficiency comes at the cost of livelihoods, purpose, and social cohesion, it’s worth asking: is this the story we want to tell?
Short-term messaging that celebrates displacement may play well with shareholders, but it risks alienating the very stakeholders companies rely on: employees, customers, regulators, and yes, even politicians. Jobs, after all, have long been the cornerstone of political campaigns; the promise of more, better, and safer employment is the currency of electoral success. But when AI begins to nibble away not just at entry-level roles but at the desks of senior executives and elected officials, the narrative becomes harder to spin.
It’s difficult to champion “growth” when the growth in question is a server farm replacing a workforce. Messaging that celebrates job losses as progress may play well in quarterly earnings calls, but it risks becoming a political liability when the electorate starts asking, “Who’s next?”
Gawdat, for all his warnings, isn’t even the most pessimistic voice in the room. Geoffrey Hinton, Nobel laureate and often dubbed the “godfather of AI”, along with Turing Award winner Yoshua Bengio and the CEOs of OpenAI, Anthropic, and Google DeepMind, have all signed an open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” When the people building AI start talking about it in the same sentence as the end of civilisation, it’s probably time to stop treating the large-scale deterioration of the job market as a feature rather than a bug.
As strategic communications advisors, our job is to help organisations think and communicate clearly when the stakes are high, and AI raises those stakes. We have a responsibility to challenge complacency and to help organisations navigate this new frontier with clarity and conscience. That means educating stakeholders about AI’s environmental, social and ethical implications. If we fail to act, we risk repeating the mistakes of the climate crisis: denial, delay, and division.
This shift was highlighted in SEC Newgate’s PR2030: The Future of Global Communications report, which calls for a reimagining of how communicators engage with emerging technologies like AI, not just as tools, but as forces shaping societal trust, transparency, and resilience. The future of corporate reputation will, to a large extent, hinge on how well we navigate the ethical and societal implications of innovation. AI isn’t just a disruptor; it’s a litmus test for corporate conscience.