How do we protect the young from social media?
There are currently 11 bills proposed in Congress covering child online protection and social media safety. Many of these were filed because other countries have begun to pass, or at least entertain, similar legislation.
To assert that we need laws to protect the young from social media, we have to accept two premises.
The first is that social media platforms contain significant potential harms that require legislation.
The second is that legislation and regulation are the right tools to protect the young, and the government is the agent to enforce this.
There are many studies already establishing correlation between social media use and depression, anxiety, and other internalizing symptoms. But the point that social media platforms stress in defending themselves is that correlation does not establish causation.
So we are in a position where we can all intuit or sense that there’s something wrong with the way social media platforms run, but we are still asking, is there sufficient evidence for legislation?
And on the other end of this: if the harms are real and we don't act now, will it be too late to reverse, undo, or prevent them by the time we do?
I think the other way to approach this is to look at what we have learned from social media platforms themselves.
An internal TikTok report acknowledged that “minors do not have executive mental function to control their screen time.” It’s also been documented that social media companies use the same tricks as casinos and gambling to keep people hooked.
We also know that social media platforms are spaces where children and young people face real, documented harms such as algorithmic manipulation, exploitation, and cyberbullying. Between the available studies — including a 2024 meta-analysis published in JAMA Pediatrics establishing correlation between social media use and internalizing symptoms across adolescent populations — and the unethical practices we already know these platforms engage in, I believe there’s more than enough reason to explore legislation.
We need to fix platforms, but not just for kids
I do have one major concern about targeting only a specific age group for protection. Of course we need to protect children, the young, and the vulnerable from the harms of social media.
But what happens when they turn 16 (the age limit set by several of the proposed bills)? If social media is a mess that radicalizes people, makes them dislike themselves, and actively causes harm, then why aren't we trying to legislate and hold social media accountable for everyone?
That’s my first challenge to those advancing these bills. Let’s start with the kids, but more importantly, we need to fix platforms so that they are actually good spaces to inhabit. Imagine if it were a physical space that was unsafe.
You wouldn’t just restrict young people from entering that space, you would demand that the space be fixed or demolished. Well, this is a space that shapes our minds. We should be even more demanding.
We need to know exactly where the interventions lie
I think one of the challenges facing any legislation is identifying where exactly the levers are. This is especially difficult given the many different kinds of platforms in play, and how often they are conflated in the proposed bills.
First off, in terms of parents and social media platforms sharing responsibility, we can ask, should we explore a “kids” mode? This would be similar to how parents can have a kid mode for their streaming services like Netflix or YouTube. And if there is such a mode, what age is appropriate for that kind of mode?
One thing I do believe is enforceable is that social media platforms such as Instagram and Facebook should not be communication channels for students. Other platforms like Discord or Viber may fill that gap, but if we are to implement restrictions on social media usage, schools would need to identify specific platforms for student communication — similar to how companies designate internal comms tools like Slack — and those platforms should be accredited by the relevant government entities.
Media literacy and maintaining safe spaces
Perhaps the biggest concern for me is that we need to be equally focused on media literacy, AI literacy, and critical thinking and engagement with the online world. It's easy to see that the world online has become far more dangerous, contentious, and problematic than it was when people first started using social networking in the mid-to-late '00s. Increasing literacy and awareness has to be a crucial — and well-funded — component of any legislation.
Carl is a writer, author, certified Human-Centered Coach, and creative entrepreneur. He is currently the Executive Director of Data and AI Ethics PH.