Business

Big Tech loses bid to toss lawsuits alleging social media platforms harmed children

A California judge on Tuesday rejected efforts by social media companies to toss nationwide lawsuits accusing the Big Tech behemoths of illegally enticing millions of kids and getting them hooked on their platforms, damaging their mental health.

US District Judge Yvonne Gonzalez Rogers in Oakland ruled against Alphabet, which operates Google and YouTube; Meta Platforms, which operates Facebook and Instagram; ByteDance, which operates TikTok; and Snap, which operates Snapchat.

The decision covers hundreds of lawsuits filed on behalf of individual children who allegedly suffered negative physical, mental and emotional health effects from social media use, including anxiety, depression and, in some cases, suicide.

“Today’s decision is a significant victory for the families that have been harmed by the dangers of social media,” the plaintiffs’ lead lawyers – Lexi Hazam, Previn Warren and Chris Seeger – said in a joint statement.


The litigation seeks, among other remedies, damages and a halt to the defendants’ alleged wrongful practices.

More than 140 school districts have filed similar lawsuits against the industry, and 42 states plus the District of Columbia last month sued Meta over youth addiction to its social media platforms.

Google spokesperson José Castañeda told The Post that “the allegations in these complaints are simply not true.”

“Protecting kids across our platforms has always been core to our work. In collaboration with child development specialists, we have built age-appropriate experiences for kids and families on YouTube, and provide parents with robust controls,” Castañeda added.

The Post has also sought comment from Meta, ByteDance and Snap.

Rogers shot down the companies’ claim that they are protected under Section 230, a provision of the federal Communications Decency Act that shields internet companies from liability for content posted by third parties.

“Nothing in Section 230 or existing case law indicates that Section 230 only applies to publishing where a defendant’s only intent is to convey or curate information,” Rogers said in her 52-page ruling. “To hold otherwise would essentially be to hold that any website that generates revenue by maintaining the interest of users and publishes content with the intent of meeting this goal, would no longer be entitled to Section 230 immunity.”

Rogers noted that the plaintiffs’ claims were broader than just focusing on third-party content, and the defendants did not address why they should not be liable for providing defective parental controls.

She pointed to features of popular social media designed to “enable coercive, predatory behavior toward children,” such as “limitations on content length,” “notifications…to draw them back to their respective platforms” and “engagement-based algorithms.”

Rogers continued: “Defendants either do not require users to enter their age upon sign-up or do not have effective age-verification for users, even though such verification technology is readily available and, in some instances, used by defendants in other contexts.”

She specifically called out Facebook, which “purports not to allow children under 13,” but only “relies on a user’s self-reported age when they sign up for the platform.”

A user under 13 who enters their real birthdate is blocked from completing the registration process.

“However, immediately thereafter, the platform permits them to re-complete the sign-up form, enter an earlier birthday (even if it does not accurately reflect their age), and create an account,” Rogers said.

The judge, though, dismissed some claims that the defendants’ platforms were defectively designed.

With Post wires