
California Jury Finds Meta and Google Liable in Social Media Addiction Case. What Are The Implications for Lawsuits Against AI? — Ep 9

Matthew Bergman
Stephanie Hepburn

Stephanie Hepburn is a writer in New Orleans. She is the editor in chief of CrisisTalk. You can reach her at editor@crisisnow.com.

Google and Meta were found liable for the plaintiff’s childhood social media addiction to Instagram and YouTube. A conversation with Matthew Bergman, one of the plaintiff’s attorneys, on how the outcome sets precedent for lawsuits against AI chatbot companies.

Transcript

Matthew Bergman: The similarities are platforms seeking to hook people for their economic gain. With respect to social media, it is an attention economy. It is geared toward maximizing the amount of attention that the individual, usually the child in our cases, engages with the platform. Basically, the amount of time that they devote to the platform. The chatbots focus not on an attention economy, but on an intimacy economy, on the person’s formation of emotional bonds with the online chatbot that emulate human relationships.

Stephanie Hepburn: This is Crisis Talk. I’m your host, Stephanie Hepburn. Today, attorney Matthew Bergman joins me. In March, his legal team won a social media addiction case in Los Angeles against Meta and Google. We talk about how this case turns on design, not content, and the implications this has for cases against AI companies. Let’s jump in.

Matthew Bergman: My name is Matthew Bergman. I’m the founding attorney of Social Media Victims Law Center.

Stephanie Hepburn: So last month, the California jury found Meta and Google liable for plaintiff Kaylee GM’s social media addiction to Instagram and YouTube as a child, which the jury found to be a substantial factor in her depression, anxiety, and body dysmorphia. Can you tell me why this is such a landmark case?

Matthew Bergman: Yes, this was the first ever jury finding where a social media company had been held accountable for causing mental health harms to young people, and not only through their negligence, but the jury found through their malice. This was an inflection point in the effort that our firm has led over the last four years to hold social media companies accountable for the carnage that their platforms are inflicting on young people in the United States and throughout the world.

Stephanie Hepburn: This is not the first case against these large social media companies, yet there’s something different in this case that seems to focus on design as opposed to content. Can you talk a little bit about those nuances?

Matthew Bergman: For decades, social media companies have exploited Section 230 of the Communications Decency Act to avoid any accountability for the known harms that their platforms are inflicting on young people. They have been able until recently to avoid even being sued for these types of harms. We adopted a different approach four years ago that we thought might be effective in holding them accountable. And that is suing them not for the content that they host on their platforms, but rather for the addictive and dangerous design that they have intentionally included in their platforms to addict young children and maintain their engagement, not by showing them what they want to see, but what they can’t look away from.

Stephanie Hepburn: And when you talk about the design, does that mean algorithms, advertisements? What does that mean?

Matthew Bergman: There are specific design features with respect to each of these platforms that are unrelated to content that are dangerous and we believe give rise to, and the jury actually found gave rise to, legal culpability. And you mentioned the first, which is the algorithm. The algorithm is not simply a curating device to show people what they want to see. It utilizes highly sophisticated AI and exploits the neurologic vulnerabilities of young people, their need for social acclamation, and their impending pubescent experiences to addict them to the platform. Again, not by showing them what they want to see, but what they can’t look away from, not by providing them with a quality online experience, but rather maintaining their engagement as much as humanly possible. And the reason for that is simple. When a kid is on social media, they are not the customer, they’re the product. The social media companies sell advertising to put in front of that kid’s eyes. The more time their eyes are on the screen, the more advertising they see, the more money the companies make.

Stephanie Hepburn: You mentioned that the social media companies often talk about Section 230. They also often raise the First Amendment. Can you flesh out a little bit about what that means? I was reading that oftentimes there’s a reliance on a case, Zeran versus AOL, which is a 1997 case. Things have changed. Can you talk a little bit about those two defenses?

Matthew Bergman: Yeah. First, the Communications Decency Act was passed in 1996 when Netscape was the largest browser. And unfortunately, through the Zeran decision and others, it has been expanded to confer liability protection on social media companies that Congress never intended. So before we started doing this work, courts were routinely throwing out cases where children were sexually abused on social media, where they were bought and sold as child prostitutes, where there was drug distribution going on online. And so that was very successfully exploited by the social media companies to say that anytime third-party content is involved in the plaintiff’s harm, they are immunized from liability. And let’s just be clear here. Every other company in America has a duty of reasonable care, except for social media companies. Up until we started doing this work, they were getting off scot-free, no matter what they knew their platforms were doing to young people. So we adopted a different approach. We focused on the design, not the content. We pointed to, in addition to algorithms, specific features such as endless scrolls, such as likes, streaks, geolocation, and other attributes of these platforms that were dangerously defective and provided veritable hunting grounds for adult predators to abuse young children.

Stephanie Hepburn: What about the First Amendment defense?

Matthew Bergman: The argument is that, you know, somehow there’s a First Amendment right to send suicidal videos to kids. As a proponent of free speech to the core of my being, I find that argument fatuous. But the argument is that anytime you seek to limit or curtail the content that is distributed online, that’s ipso facto violative of the First Amendment. Of course, that begs the question of whether AI-generated algorithmic responses are even speech, if there’s no human cognition involved in that. Secondly, there’s the issue of is it speech or is it conduct? Just because something involves speech does not render it immune from suit. Somebody can defame somebody; that is speech. That is nevertheless actionable. An employer who creates a hostile working environment by subjecting employees to racist or sexist diatribes, again, is participating in speech, but no one would suggest that individual or that company should be immune from suit. So we think that the First Amendment argument does not insulate social media companies from the known consequences of their deliberate design decisions that have nothing to do with the exchange of ideas and exercise of editorial judgment.

Stephanie Hepburn: The notion that you’re pushing back against is this idea that algorithms, AI-driven ones for example, are protected speech.

Matthew Bergman: That is among the First Amendment arguments that we confront, and one that we believe to be unfounded.

Stephanie Hepburn: AI companies are invoking both 230 and the First Amendment. Why does the social media case matter in creating precedent?

Matthew Bergman: Because it focuses on the conduct, not the content. It focuses on the design of the algorithm. If an algorithm is designed to be addictive, it doesn’t matter what content is selected to put in front of the kid’s face, as long as that content is what triggers an addictive dopamine cycle. It is the content that the algorithm selects without any intervening human editorial judgment.

Stephanie Hepburn: This case is considered a bellwether case. What does that mean exactly?

Matthew Bergman: In the California cases, there is a consolidated proceeding involving several thousand cases involving young people harmed by social media. And the court picks out a number of cases which it references as bellwethers to take to trial to provide the parties with an understanding as to the factors that impact liability and damages, with the hope that that kind of information can elicit a global settlement. And so this was the first bellwether case of several that are going to be going to trial over the next year.

Stephanie Hepburn: So, with the AI companies, what do you see as the commonalities? Social media and AI chatbots are not identical, but what are the similarities and differences that you think are important when it comes to protections and guardrails?

Matthew Bergman: The similarities are platforms seeking to hook people for their economic gain. The model is somewhat different in that with respect to social media, it is an attention economy. It is geared toward maximizing the amount of attention that the individual, usually the child in our cases, engages with the platform, basically the amount of time that they devote to the platform. The chatbots are much more pernicious, if you can imagine that, in that they focus not on an attention economy, but on an intimacy economy, on the person’s formation of emotional bonds with the online chatbot that emulate human relationships. And the more emotionally engaged that person is with the chatbot, the more connected they are, the better data the companies have in terms of the motivations that influence the person’s buying choices in the marketplace, and the more avenues toward manipulation of that person the companies have. So I think that is a major difference between an attention economy with social media and an intimacy economy with these AI chatbots.

Stephanie Hepburn: So fostering a parasocial relationship, for example.

Matthew Bergman: Well, that’s correct, yes. And of course, you know, with respect to AI chatbots, there is no third party involved. This is not connecting individuals to third parties through social media. This is individuals engaging in first-party content with a machine.

Stephanie Hepburn: One thing that we don’t have yet are federal guardrails when it comes to AI. Looking at the cases that you’re going to be bringing to trial, for example, what do you feel the federal government needs to put in place? And if they’re not going to, what can states do? And then taking that further, what can parents or individuals do to protect themselves?

Matthew Bergman: Well, we strongly advocate several legislative efforts that are underway, first and foremost the Kids Online Safety Act, which passed the last Congress in the Senate with 93 votes and strong bipartisan support. We would like to see that bill enacted in this Congress, because every day that doesn’t happen, kids are suffering and in many cases dying. That law would impose a federal duty of care on social media companies, which we think would be very important, and it bans certain product features that are particularly pernicious. The GUARD Act is another bipartisan bill being put forward by Senator Hawley and I think Senator Britt and several others that would apply to AI chatbots and, in very limited circumstances, provide some curtailment of the AI as it pertains to child sexual abuse material and suicidal and mental health related content. There are some excellent bills at the state level designed to restrict social media access to kids over the age of 16. We’re strongly supportive of those efforts and of various other tech accountability statutes that are being enacted throughout the United States at the state level.

Stephanie Hepburn: So when I think about putting these age limits in place, do they really work? I mean, I feel like that would be difficult to enforce and easy to work around.

Matthew Bergman: Based on my experience, yes, they do work. What we have seen in the thousands of families that we’ve represented is that the later kids get online, the less likely they are to become addicted and suffer severe mental health consequences. The human brain undergoes more change during the adolescent years than any time except infancy. And if during that time of cognitive and neurologic development social media is introduced, in our experience and in the research that we rely upon, kids are more likely to develop addictive relationships. Extending the age to 16 would make a huge difference. In terms of enforcement, it depends how you enforce it. You know, currently there’s purportedly an age-13 limit for kids getting online, and they enforce that by relying on kids self-reporting their age, which is ridiculous. The fact is the companies know through their own data exactly how old kids are based on the nature of their online activities. They don’t rely on stated age when deciding what advertising to feed to kids; they rely on the way the child engages online. And so if the burden is placed on the social media company to enforce an age restriction, they have the technology readily available and actively in use that can enforce that in most instances.

Stephanie Hepburn: When we talk about AI companies, what I’m seeing is a lot of sycophancy. I know that some of the companies are working on that. You mentioned fostering that connection, or a sense of connection, between the user and the chatbot. When we think about guardrails, what can those look like in order to protect people from developing a parasocial relationship?

Matthew Bergman: Well, I think there’s a fundamental Rubicon that was crossed that is very dangerous, which is where online chatbots seek to emulate and create emotional, human-like relationships through anthropomorphism. That is unnecessary, but it is pernicious. And that was warned about in 2020, before any of this got started, by some very brave researchers at Google Brain who lost their jobs after reporting on it. But where these chatbots seek to imitate human relationships, they’re very dangerous, based on the nature of human cognition and linguistics. And interestingly, you know, where they are simply providing information, they’re less pernicious. I mean, I think one might still worry whether AI will give rise to an intellectual complacency among users and whether intellectual rigor will go by the wayside. But in terms of emotional and mental health, as long as it is a source of information, it’s unlikely to result in severe mental health harms or what has colloquially been referred to, inaccurately, as AI psychosis.

Stephanie Hepburn: And what about the sycophancy aspect? If you use AI for personal reasons, it tends to agree with you. And if you are struggling, instead of providing resources, sometimes it just agrees with what you’re saying. Now that’s shifting. I’ve been doing tests with the AI chatbots, especially ChatGPT, Claude, Gemini, and Character AI. Their guardrails are evolving and changing. Most recently, I did a test with ChatGPT, and I used both implicit and explicit language of suicidal ideation. It did provide a pop-up, which it was already providing, and it provided resources, which it was previously providing, but it has now started providing hyperlinks to, for example, 988, which you can click to text or to call or to chat. Those guardrails are tremendously important, but they’re not consistent yet. My tests with Claude and Gemini have been the most consistent in terms of the chatbot providing resources. But you have these layered issues: if somebody is turning to the chatbot at 2 or 3 a.m. because they’re struggling, the sycophancy just provides a mirror of their own feelings. Maybe it temporarily alleviates their stress, but it can also be tremendously harmful. And I’m curious, from a guardrails perspective, what are your thoughts on that?

Matthew Bergman: We have seen so many cases where a person is coached through suicide, where their natural tendency toward self-preservation is superseded by active encouragement. I agree with you that different chatbots have different levels of protection. Clearly, ChatGPT-4, which has now been yanked from the market, was among the worst. I think that there are guardrails. There’s also simply turning the thing off. Because when a person is in a suicidal spiral or a depressive spiral, simply continuing the conversation, even if there are smatterings of affirming content, means they still remain in that isolated state. And so I think that among the guardrails needs to be something like turning the thing off. The analogy would be: if you try to log in with a bad password two or three times, the app will lock you out. And I think the same kind of application should be in place when one is dealing with suicidal or self-harming content.

Stephanie Hepburn: It’s interesting because for a long time the AI companies did default to halting the conversation and then providing a pop-up. So they didn’t just halt it; they also created a pop-up that had resources. But there’s this big ongoing debate as to whether to halt the conversation or allow it to continue while providing resources. And what I’ve been seeing is that ChatGPT, Claude, and Gemini allow the conversation to go on while also providing resources, and Character AI does what you’re saying, which is completely halt the dialogue. But it’s really based on keywords. And what I’m noticing is that it’s tremendously difficult for these chatbots to distinguish between somebody saying something out of frustration and somebody who isn’t directly stating that they’re experiencing suicidal ideation but is saying something like, “I’m not sure I want to be here.” The AI chatbots seem to have a difficult time distinguishing, or using context clues sufficiently to determine, oh no, this person’s really in distress.

Matthew Bergman: Well, I guess I’d say a couple of things to that. First, if you can’t get it right, don’t do it at all. And to the extent that this stems from what we’ve talked about before, the emulation of human relationships with machines, one could say that’s inherently dangerous, and maybe it shouldn’t be doing that at all. But I think, to its credit, Character AI has made some very significant strides forward. Unfortunately, it took a lawsuit to get them there, but to their credit, they are taking accountability in a way that no one else in the industry is.

Stephanie Hepburn: Social media doesn’t appear to be going anywhere, and neither does AI. So with that in mind, what can people do to protect themselves? There’s likely not going to be federal regulation anytime soon. Hopefully there is, but that doesn’t seem to be the case right now. AI is pervasive now. You go onto Google, it’s there. It’s really difficult not to have any sort of interaction with AI at this point, and that’s probably not going to lessen. So what can people do for themselves or their family members? I understand there need to be external guardrails and that these companies self-regulating is really not working. But I wanted to get your thoughts there.

Matthew Bergman: I think the more people know about the clear and present danger that these platforms pose, particularly to their children, the better. And the more publicity there is, the better. And the more parents and victims speak out, the better.

Stephanie Hepburn: That was Matthew Bergman. He’s the founder of the Social Media Victims Law Center and one of the plaintiff’s attorneys in the recent social media addiction case against Meta and Google in California. A jury found the companies liable for the plaintiff’s social media addiction to Instagram and YouTube as a child. TikTok and Snap settled with the plaintiff before the trial. If you enjoyed this episode, please subscribe and leave us a review wherever you listen to the podcast. It helps others find the show. Thanks for listening. I’m your host and producer. Our associate producer is Rin Koenig. Audio engineering by Chris Mann. Music is “Vinyl Couch” by Blue Dot Sessions.


References
Matthew Bergman’s statement on the California social media addiction trial verdict

Meta and YouTube Held Responsible for Harm to Vulnerable Users in First-of-Its-Kind Trial

Sycophancy in GPT‑4o: what happened and what we’re doing about it

Introducing GPT‑5.5

43 States Suing Meta, Claiming Algorithms Fuel Youth Mental Health Crisis

Where to listen to and follow ‘CrisisTalk’

Apple | Spotify | Amazon | iHeart | YouTube

We want to hear from you

Have you turned to an AI chatbot to discuss an interpersonal or mental health issue? Work at an LLM AI company? Are you a researcher studying AI and mental health? We want to hear from you. Reach us at editor@crisisnow.com

Credits
“CrisisTalk” is hosted and produced by Stephanie Hepburn. Our associate producer is Rin Koenig. Audio-engineering by Chris Mann. Music is “Vinyl Couch” by Blue Dot Sessions.
