The debate unfolding in Greece over a possible ban on social media for children under 16 is not merely a technical regulatory matter. It is a profound political choice, one with serious implications for rights, privacy, competition, and the very relationship young people have with the digital world.
The intention is understandable: protect minors from genuine online harms. Few would dispute that children face real risks online, from cyberbullying and harmful content to predatory design and addictive platform mechanics. But good intentions do not automatically make for good policy. Experience from other jurisdictions suggests that broad age-based bans rarely achieve their stated goal. More often, they create new problems, while strengthening precisely the forces they claim to restrain.
The logic behind an outright ban is deceptively simple: if we close the doors of the major platforms, children will be safer. Yet that assumption collapses under even modest scrutiny. Australia’s experimentation with strict age-assurance proposals, and the United Kingdom’s long-running debates around online safety legislation, have already shown the two dangerous directions such measures tend to produce: mass surveillance or technological loopholes.
Age verification is often presented as a neutral technical process. It is not. To prove someone is over 16, systems typically require some form of personal data — an ID card, biometric information, facial estimation, or some form of third-party credential. That means creating or relying on vast systems of digital identification, including data about minors. These databases become prime targets for leaks, abuse, and cyberattacks. In a world where even giants such as Meta and Google have faced repeated scrutiny over data handling, the idea that governments or platforms can create a “safe” system of mass age verification for children is, at best, optimistic. At worst, it is dangerously naive.
The privacy risks do not stop at age checks. Across Europe and beyond, the language of “child protection” has increasingly been used to justify proposals that would weaken encryption, whether by scanning private messages, mandating content detection in encrypted services, or creating exceptional-access mechanisms, the euphemism for what technologists know as backdoors.
This is one of the most troubling aspects of the broader global trend. End-to-end encryption is not a privilege for criminals. It is a basic security tool for journalists, dissidents, human rights defenders, businesses, public officials, and ordinary citizens. It protects children, too. Once a system creates a "backdoor" for the good guys, it creates a vulnerability for everyone else. The history of digital security is full of examples where such exceptions quickly become systemic weaknesses, exploited by cybercriminals, hostile states, or authoritarian governments.
But the most important issue is not technical. It is constitutional, democratic, and moral. Children are not merely passive objects of protection; they are subjects of rights. Under the UN Convention on the Rights of the Child, minors enjoy rights to freedom of expression, access to information, association, and participation in cultural and public life. In the digital age, social media platforms are not simply entertainment spaces. They are central arenas where many of these rights are exercised.
This is precisely why the warning issued by Council of Europe Commissioner for Human Rights Michael O’Flaherty should be taken seriously. Blanket bans on children’s access to social media, he has argued, risk undermining fundamental rights while proving both disproportionate and ineffective. His intervention matters because it reframes the debate. The question is not whether children need protection online (of course they do) but whether states can lawfully and wisely restrict access to modern public spaces without undermining the rights of young people themselves.
This concern becomes even more urgent when we consider children from marginalized or isolated communities. For many young people, especially those in rural, conservative, or socially hostile environments, social media is not merely a source of distraction. It is a critical source of belonging. This is particularly true for LGBTQI+ teenagers, who often turn to platforms such as TikTok or Instagram to find peers, support networks, community organizations, and language to understand their own identity. For some, online communities provide the first safe space where they can ask questions, seek help, or simply feel less alone. A broad ban risks isolating precisely those children who most need connection, solidarity, and emotional support.
Even if one sets aside the rights concerns, there remains the practical question: would it work? History suggests otherwise. Restrictive policies rarely eliminate demand; they simply displace it. Just as Prohibition in the United States did not end alcohol consumption but drove it underground, a digital ban would not stop children from seeking access to social media. It would push them toward less visible, less accountable, and less regulated spaces: obscure apps, foreign forums, adult-owned accounts, falsified credentials, or VPNs. The result would not be less risk. It would be less transparency, less accountability, and fewer opportunities for intervention.
There is also an irony here that policymakers should not ignore. Measures supposedly designed to curb the power of large technology companies may in fact reinforce it. The biggest platforms have the financial, legal, and technical resources to build sophisticated age-verification systems, absorb compliance costs, and negotiate with regulators. Smaller platforms, start-ups, and alternative digital services do not. That means regulatory burdens can easily become barriers to entry, consolidating the dominance of the very incumbents governments claim to challenge. Instead of taming Big Tech, we risk turning the largest firms into indispensable gatekeepers of digital identity and age assurance.
What real child protection looks like
At its core, the under-16 ban is a symptom of a broader policy failure. It targets the visible symptom rather than the underlying causes: inadequate digital literacy, weak platform accountability, poor content moderation, opaque recommendation systems, and business models built around engagement maximization, emotional manipulation, and addictive design. If we do not change the incentives that shape the online environment, then we are not solving the problem. We are simply relocating it.
Children are not endangered merely because they are online. They are endangered because many platforms are built to exploit attention, amplify outrage, and privilege virality over well-being. That is where regulation should focus.
If Greece is serious about protecting children online, the solution begins much earlier, in the classroom. Digital education cannot be a one-off seminar in secondary school. It must be integrated into the curriculum from the earliest grades: what algorithms are, how advertising works, what privacy means, how misinformation spreads, how to recognize manipulation, how to respond to cyberbullying, and how to navigate digital spaces critically and safely. Children who understand the systems around them are far more resilient than children who are simply locked out of them.
Equally important is the role of parents. Many parents understandably feel overwhelmed by the speed of technological change. Governments can help by investing in parent education, practical support tools, public-awareness campaigns, and accessible guidance. Trust and communication at home remain more effective than any blanket filter. When a child knows they can talk openly about a disturbing online experience without fear of punishment, the risk is meaningfully reduced. Fear-driven prohibition, by contrast, often drives secrecy.
A ban on social media for children under 16 offers an easy and politically marketable message: we did something. But protecting children is not about symbolic gestures. It requires long-term investment in education, institutional transparency, platform accountability, safer design standards, and enforceable obligations on companies whose business models profit from harm.
Rather than locking children out of the internet, we should equip them to navigate it safely — and compel platforms to stop designing digital environments that prey on vulnerability. Otherwise, Greece risks repeating a familiar mistake: confronting a complex social transformation with a simplistic legal patch. And patches, however well-intentioned, rarely survive contact with reality.
Dr. Konstantinos Komaitis is a resident senior fellow with the Democracy and Technology Initiative at the Atlantic Council, leading the work on digital governance and democracy.