Supporters of these proposals argue that strict age limits could protect young people from cyberbullying, violent content, and child sexual abuse material, as well as from the intrusive data collection and addictive design features embedded in platforms. Critics, however, argue that blanket bans risk treating the symptom rather than the cause, failing to address the deeper structural incentives that produce these harms.
In December 2025, Australia responded to these concerns by becoming the first country in the world to bar young people under sixteen from accessing large social media platforms. In January 2026, France passed a bill preventing children under fifteen from accessing social media. Similar proposals are now under consideration in several other EU countries.
Meanwhile, the European Parliament has proposed a unified minimum age requirement for social media across the bloc. The proposal suggests a minimum age of sixteen, with provisions for parental consent for those between thirteen and fifteen.
Proponents of the bloc-wide proposal argue that a hard age limit would provide stronger tools to reduce exposure to online harms.
But not all EU countries support this approach. Anna-Liisa Pärnalaas, Head of Strategy for Digital Development at the Ministry of Justice and Digital Affairs, told Eesti Elu that Estonia supports a more nuanced approach to addressing online harms: “Estonia currently does not support introducing additional EU-wide restrictions or bans on minors’ access to social media… Rather than imposing higher age thresholds across all social media platforms, targeted measures could be considered in higher-risk environments, such as adult content platforms, where stronger safeguards may be justified to protect minors.”
While blanket bans may be politically appealing, they fail to address the structural factors that produce harm. Platforms rely on user engagement to generate profit. To maximize engagement, they deploy user profiling and addictive design features intended to keep users scrolling for as long as possible.
Pärnalaas also notes that enforcing a blanket ban may be difficult. “Such measures can be easily circumvented, for example using VPNs,” she explained. “Banning widely used social media platforms also carries the risk that users will migrate to lesser-known environments that are often less regulated and may expose young people to even greater risks. In addition, some applications can be used without creating or logging into an account, which increases the risk that users may be exposed to content that is not age appropriate.”
Estonia has successfully built a digital society through education and inclusion. Instead of excluding youth from online spaces, the country believes in providing them with the resources to safely navigate them. “Young people have the right to engage in digital society, and their participation should occur within a safe and supportive online environment,” said Pärnalaas. “…Estonia also considers it essential to raise digital awareness among children and parents alike and we put great emphasis on this. By empowering children through consistent, age‑appropriate education and nurturing their critical thinking skills, we can ensure they have the knowledge and confidence to navigate the digital world safely.”
One approach to addressing online harm may be to improve the enforcement of existing regulations, according to Pärnalaas. The EU’s Digital Services Act (DSA), for example, requires very large online platforms (VLOPs) to mitigate systemic risks, including those to minors. However, enforcement gaps remain. There are no settled benchmarks defined under each systemic risk, meaning that VLOPs have broad discretion to address them as they see fit. Resolving these issues may prove more effective than introducing sweeping bans that attempt to substitute for regulatory oversight.
“We expect the discussions to continue and to increasingly focus on research-based, balanced, and effective measures. We are looking forward to the action plan that the President of the Commission is planning to launch in the summer,” said Pärnalaas.
In Canada, calls for a similar ban have also been growing.
However, Canada’s regulatory context stands in stark contrast to the EU’s: the country lacks comprehensive online harms regulation. Without any relevant enforcement infrastructure in place, a blanket ban would be difficult to implement.
The government is reportedly preparing a renewed online harms bill, although it has not confirmed whether the proposal will include such bans. Hermine Landry, press secretary and senior communications advisor to Minister of Canadian Identity and Culture Marc Miller, said the government is considering various options:
“We all want our children to be safe as they navigate the digital world, and platforms have an important role to play in meeting that challenge. Our government intends to act swiftly to better protect Canadians, especially children, from online harm. No decisions have been made and we will have more details to share in due course.”
Although a social media ban may be too extreme an instrument, the logic behind it has not emerged in a vacuum. Online platforms are causing significant harm to users. In Canada, the debate over online harms regulation is no longer a question of if, but when. Until strong rules are in place, the country’s legal landscape remains poorly equipped to address the complex risks that digital platforms pose to society.
This article was written by Natalie Jenkins as part of the Local Journalist Initiative.