As Online Harms Spread, Should Children Be Banned from Social Media?

As concerns about the harms of online activity grow, governments around the world are debating whether children should be banned from social media entirely.

Two boys using their phones, from pexels.com

Supporters of these proposals argue that strict age limits could protect young people from cyberbullying, violent content, and child sexual abuse material, as well as from the intrusive data collection and addictive design features embedded in platforms. Critics, however, argue that blanket bans risk treating the symptom rather than the cause, failing to address the deeper structural incentives that produce many of these harms.

In December 2025, Australia responded to these concerns by becoming the first country in the world to ban youth under sixteen from accessing large social media platforms. In January 2026, France passed a bill preventing children under fifteen from accessing social media. Similar proposals are now under consideration in several other EU countries.

Meanwhile, the European Parliament has proposed a unified minimum age requirement for social media across the bloc. Under the proposal, sixteen would become the default minimum age: users aged thirteen to sixteen could access social media only with parental consent, while those under thirteen would be barred entirely.

Proponents argue that a hard line would provide stronger tools to reduce exposure to online harms.

Denmark’s minister for digital government, Caroline Stage Olsen, said that “A digital age of majority may be a ‘radical move’ but it is needed when we look at the numbers of our children's well-being” (source: POLITICO).

But not all EU countries support this approach. Recent reporting shows Estonia is against such proposals: “Estonia believes in an information society and including young people in the information society,” Estonia’s minister of justice and digital affairs, Liisa-Ly Pakosta, told POLITICO.

While blanket bans may be politically appealing, critics contend they fail to address the structural factors that produce harm. Platforms rely on user engagement to generate profit, so they deploy user profiling and addictive design features intended to keep users scrolling for as long as possible. A ban may temporarily reduce children's exposure to online harms, but it does little to change the underlying economic incentives that produce them.

Experts also note that enforcement would be difficult. Age verification systems and geoblocks can be bypassed with virtual private networks (VPNs), while also posing new risks to users' privacy. Silkie Carlo, director of the civil liberties group Big Brother Watch, told the Daily Mail that banning children from social media would likely require intrusive identity checks: “The only way to ban children from social media is through mandatory online ID checks for us all, adults and children alike. [These] are highly invasive and the biometric and behavioural profiling options are highly inaccurate, meaning IDs will be required in many millions of cases regardless.”

Instead of excluding youth from online spaces, experts argue that platforms should be required to design safer online environments. Professors Kaitlynn Mendes and Christopher Dietzel wrote in the Toronto Star that, as citizens, young people have a right to safely participate in the digital public sphere. The burden of responsibility should lie on platforms to ensure their products are safe to use.

One approach to addressing online harm may be to improve the enforcement of existing regulations. The EU’s Digital Services Act (DSA), for example, requires very large online platforms (VLOPs) to mitigate systemic risks, including those to minors. However, enforcement gaps remain. There are no settled benchmarks defined under each systemic risk, meaning that VLOPs have broad discretion to address them as they see fit. Resolving these issues may prove more effective than introducing sweeping bans that attempt to substitute for regulatory oversight.

In Canada, calls for a similar ban have also been growing.

However, Canada’s regulatory context stands in stark contrast to the EU's: the country lacks comprehensive online harms regulation. Without any relevant enforcement infrastructure in place, a blanket ban would be difficult to implement.

Recent reporting indicates that the government is preparing a renewed online harms bill, although it has not confirmed whether the proposal will include such bans. Hermine Landry, press secretary and senior communications advisor to Minister of Canadian Identity and Culture Marc Miller, said the government is considering various options:

“We all want our children to be safe as they navigate the digital world, and platforms have an important role to play in meeting that challenge. Our government intends to act swiftly to better protect Canadians, especially children, from online harm. No decisions have been made and we will have more details to share in due course.”

Although a social media ban may be too extreme an instrument, the logic behind it has not emerged in a vacuum. Online platforms are causing significant harm to users. In Canada, the debate over online harms regulation is no longer a question of if, but when. Until strong rules are in place, the country’s legal landscape remains poorly equipped to address the complex risks that digital platforms pose to society.

This article was written by Natalie Jenkins as part of the Local Journalist Initiative.
