

Are Meta Algorithms Fueling Youth Mental Health Crisis? Whistleblowers and 43 State Attorneys General Say Yes.

Stephanie Hepburn is a writer in New Orleans. She is the editor in chief of #CrisisTalk.

This past October, a bipartisan coalition of attorneys general from dozens of states — including Arizona, California, Louisiana and New York — sued Meta in federal court in California, accusing the company of deliberately designing Instagram and Facebook to be addictive to young users even as its executives testified before Congress that the platforms are safe. Other states and the District of Columbia have filed similar suits. Since then, U.S. District Court Judge Yvonne Gonzalez Rogers has denied a motion by the tech giant and three other companies operating some of the most popular social media platforms — ByteDance’s TikTok, Google’s YouTube and Snap’s Snapchat — to dismiss claims that they run platforms addictive to children.

The complaint against Meta accuses the company of exploiting vulnerabilities in young people’s developing brains through algorithms “designed to capitalize on young users’ dopamine responses and create an addictive cycle of engagement.” Tens of thousands of internal documents leaked by Meta whistleblowers form the backbone of the lawsuit.

Two years before attorneys general crossed partisan lines to file suit against Meta, the Wall Street Journal published its “Facebook Files” series — a collection of stories based on leaked documents from inside the tech giant — causing upheaval right before Facebook rebranded as Meta Platforms, Inc. on Oct. 28, 2021. Frances Haugen, a former Facebook product manager, provided the Wall Street Journal with internal documents showing the company knew the harm its platforms were doing to youth mental health: one in five surveyed teens reported that Instagram made them feel worse about themselves, and girls in both the U.S. and the U.K. reported feeling worse than boys. One slide from an internal Instagram presentation titled “Teen Mental Health Deep Dive” said teens blamed Instagram for rising rates of anxiety and depression among their peers, with both boys and girls citing social comparison as the “number one” reason the platform was worse than others for mental health.

Teens already struggling with mental health said Instagram made it worse, pointing to pressure to conform to social stereotypes and to match the money and body shapes of influencers, the need for validation (views, likes, followers), friendship conflicts, bullying, hate speech, the over-sexualization of girls and inappropriate advertisements targeted at vulnerable groups.

Meta responded that the Wall Street Journal had mischaracterized “internal Instagram research into teenagers and well-being.” Yet a month earlier, Instagram spokesperson Stephanie Otway had exchanged messages with Adam Mosseri, the head of Instagram, about reporter Jeff Horwitz’s forthcoming Journal story, one “that essentially argues that IG’s design is inherently bad for teenage girls … arguments based on our own research so are difficult to rebut.”

Otway told Mosseri she was “mostly worried about the fallout from the article that our own research confirmed what everyone has long suspected.”

On Oct. 5, 2021, Haugen testified before the U.S. Senate Commerce Committee that Meta’s leadership knows how to make Facebook and Instagram safer but chooses to “put their astronomical profits before people.” In her written testimony, she said Facebook’s “profit optimizing machine is generating self-harm and self-hate — especially for vulnerable groups, like teenage girls.” She called for congressional action.

Since then, a second whistleblower, Arturo Bejar — a former Facebook engineering director who worked to combat cyberbullying and, later, a consultant for Instagram — has come forward. Last fall, he testified before the Senate Judiciary Subcommittee on Privacy, Technology and the Law on social media and the teen mental health crisis, providing internal research and emails that reveal Meta knew of the harm to teens on its platforms.  

Bejar says the 2021 Bad Experiences and Encounters Framework survey, now an exhibit in New Mexico’s claim against Meta, can guide federal and state legislators on the transparency and data feedback loop needed to protect teens. “It’s intended to be very actionable,” Bejar told CrisisTalk.

He and other members of Instagram’s well-being team developed the survey, which included 237,923 participants of all ages and was to be issued every six months. The findings were troubling — more than half of respondents had negative experiences on Instagram within the previous seven days. The percentage was highest among users ages 16-17 (57.3%), followed by those 18-21 (55.7%) and 13-15 (54.1%).

“I was trying to understand what harm people are experiencing and then realized just how extensive that was,” he said. 

The youngest participants, those ages 13-15, had the highest rates of adverse experiences, including bullying and negative comparison. They also reported seeing more self-harm content and nudity. Boys in this age range experienced more bullying on the app (14.4%), while girls had the highest rates of negative social comparison (27.4%).

When respondents were asked whether they had experienced the issue “more than seven days” ago, rates increased, with 27% of 13- to 15-year-olds reporting they’d received unwanted sexual advances on Instagram. Unwanted advances and bullying often took place in direct messages, while users were more likely to see self-harm content in their feed and stories, in the explore and search areas of the app, or on someone’s profile.

Reporting was low. Only 1% reported the offending content and only 2% of those who reported were able to get the content removed from the platform.

Bejar sent the data to Meta leadership, pointing out that “the reporting process grossly understated misconduct on the site,” but said the reaction wasn’t constructive. “Sheryl Sandberg expressed empathy for my daughter [who had received unwanted sexual advances on Instagram] but offered no concrete ideas or action,” he said in his written testimony. Mosseri responded by requesting a follow-up meeting. “Mark Zuckerberg never replied. That was unusual. It might have happened, but I don’t recall Mark ever not responding to me previously in numerous communications, either by email or by asking for an in-person meeting.”

Echoing Haugen’s concerns, he testified that Meta knew the percentage of teens experiencing harm but “were not acting on it.”

The complaint by 33 attorneys general alleges that Meta’s recommendation algorithm is harmful to young users’ mental health, keeping them engaged and drawing “unwitting users into rabbit holes of algorithmically curated material.” While social comparison and algorithms designed to keep users engaged can affect both adult and young users, the attorneys general highlight that teens are more vulnerable. 

In January, a little over a month after the filing, Meta announced it would add protections to give teens “more age-appropriate experiences” on its apps by starting to hide results related to suicide, self-harm and eating disorders, noting that while posting about self-harm can help destigmatize it, such content isn’t “suitable for all young people.” The company said it would begin to remove self-harm and other age-inappropriate content from teens’ experience of Instagram and Facebook. “We already aim not to recommend this type of content to teens in places like reels and explore, and with these changes, we’ll no longer show it to teens in feed and stories, even if it’s shared by someone they follow,” Meta said in the blog post.

In the past few years, Instagram has changed its default settings. New accounts for teens under 16 now default to private and are set to show less sensitive content, though kids can change the settings. “Less sensitive by what measure?” said Bejar.

Despite these measures, which Bejar calls Meta’s “placebo for press and regulators,” he says much of Instagram’s harmful content remains on the app. In 2017, a U.K. teen, 14-year-old Molly Russell, died from what the North London coroner’s court ruled was “an act of self-harm whilst suffering from depression and the negative effects of online content.” Molly joined Instagram when she was 12, illustrating how easy it is for younger children to access the app. Bejar, who recently met with Molly’s father, Ian Russell, said “most of the content she saw is still up, getting recommended and distributed.”

The teen had fallen down an algorithmic rabbit hole of suicide, self-harm and depression content on Instagram and Pinterest. Coroner Andrew Walker said the algorithms resulted in binges of “images, video clips and text, some of which were selected and provided without Molly requesting them.” Walker also said some of the content romanticized acts of self-harm by young people or “sought to isolate and discourage discussion with those who may have been able to help.”

Sen. Richard Blumenthal of Connecticut shared at a hearing on protecting children online that he and his team created a fake Instagram account as a 13-year-old interested in extreme dieting and eating disorders. Within a day, recommendations were exclusively filled with accounts promoting self-injury and eating disorders. “Instagram latched onto that teenager’s initial insecurities and pushed more content and recommendations, glorifying eating disorders,” he said at another hearing a month later. “That is how Instagram’s algorithm can push teens into darker and darker places.”

Social media platforms often have conflicting objectives when optimizing algorithms, including user satisfaction, content recommendations and advertisements. Meta relies heavily on advertising, which generated more than $131 billion in 2023, nearly 98% of the company’s revenue.

Mosseri, the head of Instagram, wrote a blog post on the app’s algorithm, explaining that each part of Instagram — “feed, stories, explore, reels, search and more” — has its own algorithm. In a user’s feed, they’ll see “a mix of content from the accounts you’ve chosen to follow, recommended content from accounts we think you’ll enjoy and ads.” The algorithm also factors in a user’s preferred format. “If we notice you prefer photos, we’ll show you more photos. We call these ‘signals,’ and there are thousands of them.” (Feed is one part of the app where respondents to Bejar and the well-being team’s 2021 survey were more likely to see self-harm content.)

Social media algorithms are notoriously black boxes. Bejar says a user doesn’t have to “like” or express interest for similar content to be recommended. He gives the hypothetical of a disturbing video where a skateboarder falls and is badly injured — a kid could watch in horror but the algorithm will just register that they watched. “The way these recommendation systems work is that if you spend time looking at something, it interprets that you are interested and then recommends more of it.” 
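Bejar’s skateboarding example maps onto a simple feedback loop. The sketch below is a hypothetical, deliberately stripped-down illustration of that loop, not Meta’s code or any platform’s actual ranking system; the function names, topic labels and scoring rule are invented. The point is that dwell time alone raises a topic’s score, so content a teen watches in distress gets ranked, and recommended, exactly as if they had wanted more of it.

```python
from collections import defaultdict

def update_interest_scores(scores, watch_events):
    """Hypothetical engagement-based scoring: add watch time to each topic's score.
    Time spent is the only signal recorded; whether the viewer was delighted or
    disturbed never enters the model."""
    for event in watch_events:
        scores[event["topic"]] += event["seconds_watched"]
    return scores

def recommend(scores, candidate_posts, k=5):
    """Rank candidate posts by the viewer's accumulated topic scores."""
    return sorted(candidate_posts, key=lambda post: scores[post["topic"]], reverse=True)[:k]

# A teen lingers on a graphic skateboarding-injury video out of shock, not interest.
scores = defaultdict(float)
update_interest_scores(scores, [{"topic": "graphic_injury", "seconds_watched": 45}])

candidates = [
    {"id": 1, "topic": "graphic_injury"},
    {"id": 2, "topic": "skate_tutorials"},
    {"id": 3, "topic": "pets"},
]

# The injury topic now outranks everything else, so more of it gets recommended.
print(recommend(scores, candidates, k=2))
```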

Mosseri wrote in his blog post on algorithms that users can tap “not interested” when the algorithm recommends content they don’t want to see, but Bejar says these social media functions are often ineffective. A 2022 study by Mozilla, the nonprofit behind the Firefox web browser, reported the same to be true for YouTube, finding that its “not interested” and “dislike” buttons barely worked.

Bejar regularly hears from parents that kids pressing “not interested” has little effect, if any. “The buttons don’t do what you would hope they would do,” he said. “There’s no good way for you to say, ‘That’s not for me.’”

He says the black box of social media algorithms is deliberately difficult to crack. That’s why he suggests mental health leaders and legislators think of algorithms the way they do car engines and federal emissions standards. “While the ins and outs are difficult to understand, the outcomes aren’t,” he said.

Does a social media company have an obligation to provide care for young people, or at least not to harm them? Bejar says yes to both. That, he says, is why social media companies have teams to address these harms within their platforms and to help identify and provide resources for users experiencing mental health challenges.

In the U.S., a bipartisan Senate bill called the Kids Online Safety Act emphasizes social media companies’ “duty of care” to children. Under the bill, the Federal Trade Commission could bring enforcement action against a company whose app causes young users to “obsessively use their platform to the detriment of their mental health” or that financially exploits them. But there is no companion bill in the House.

Bejar says his whistleblowing isn’t about censorship or hindering freedom of speech. “People should be able to post whatever a platform allows, but should that be recommended in bulk to a 13-year-old?” he asked. “There’s the rub — they don’t get a PG feed.” 

The problem, says Bejar, is that Meta’s well-being teams have long been under-resourced. Over the past five years, some of the company’s executives, including Nick Clegg, Meta’s president of global affairs, have internally expressed concern. In 2021, Clegg sent Meta leadership a proposal for “additional investment to strengthen our position on wellbeing across the company,” highlighting the increasing urgency amid “concerns about the impact of our products on young people’s mental health” from “politicians in the US, UK, EU and Australia.”

“In the US, this was specifically raised with me by the Surgeon General, and is the subject of potential legal action from State AGs,” he said in an email to Meta leadership. “We have received numerous policymaker inquiries and hearing requests.” (The U.S. surgeon general, Dr. Vivek Murthy, has issued an advisory on social media and youth mental health.) 

Clegg told them of increasing concerns about the effects of social media during Covid, “exacerbated by increased suicide ideation amongst teens during the pandemic as well as an uptick in actual suicides and other negative mental health outcomes.” He asked for a minimum of 20 engineers, and ideally 84, for a well-being product strategy focused on problematic use, bullying, connections, suicide and self-injury across Facebook and Instagram.

“Nick asked for 84 engineers out of 35,000,” said Bejar, “and the answer was no.”

Bejar says creating transparency and a data feedback loop for regulators wouldn’t be challenging. “Facebook and Instagram have very sophisticated infrastructures for surveying,” he said. Nor would it be hard to build an effective “not interested” button and easier steps for reporting specific issues like unwanted advances. “It’s very easy to build, would take three months, and wouldn’t affect their economic goals.”
