Are social media bans for children enough to keep them safe online?
Following Australia’s recent ban on social media for under-16s, other countries are moving in the same direction, either announcing bans of their own or launching investigations into potential restrictions for children. We understand that children aren’t equipped to navigate social media by themselves, and we are no longer leaving it to the platforms to self-regulate usage.
In the UK, many have welcomed the government’s announcement of a three-month public consultation into children’s use of social media, including addictive behaviours such as ‘doomscrolling’ and their effects on mental health and safety. All signs point towards a future total ban for under-16s, with the Conservatives saying they would introduce one without consultation if they were in power.
But are bans enough to keep kids safe online? Probably not.
As early results in Australia have shown, whilst some kids have embraced the ban and its benefits, this generation, who grew up with the internet and touchscreens from birth, is savvy. They know how to circumvent age restrictions or use VPNs to reach now-forbidden sites, and many are still accessing the same content as before. What’s different now is that some are hiding their usage from parents, whilst others are spending even longer on platforms that don’t fall under the ban (in Australia, apps including WhatsApp, Facebook Messenger, Discord and Roblox remain available to under-16s).
Could this surreptitious use, coupled with advances in AI technology, especially image generation, leave parents and lawmakers with a problem that is even harder to regulate than before?
The recent launch of investigations into Elon Musk’s Grok AI tool over the creation of sexualised deepfake images of real people underlines just how quickly technology is evolving - in this instance, for the worse. The EU watchdog’s investigation alleges that some of the nonconsensual AI-generated images included children, and Spanish prosecutors are investigating X and other platforms over the creation and proliferation of AI-generated child sexual abuse material.
It’s clear that the online world is evolving faster than the laws we’re developing to protect people, including children. So the question is: how can we help policymakers match the speed at which AI and other technologies develop, to ensure adequate protections are in place before harm is done?