
AI and Mental Health – Ep. 6

Stephanie Hepburn

Stephanie Hepburn is a writer in New Orleans. She is the editor in chief of #CrisisTalk. You can reach her at .

After her daughter died by suicide, journalist Laura Reiley discovered her ChatGPT chats. What responsibility do AI chatbot companies have to the people who use them?

Transcript

Laura Reiley: Her best friend thought to look for her ChatGPT log. So we found this voluminous file of chat that started the way all of our chats start, or most of ours start: answering questions, helping her with recipes, helping her with fitness things. And fitness inexorably moves toward other kinds of self-help and life coaching, and “what’s wrong with me?” And so she more and more was leaning on it as a therapist. Meanwhile, she had a flesh-and-blood therapist whom she was not confiding in. And she told ChatGPT, this kind of avatar, this therapist avatar called Harry that she downloaded from Reddit. She told it that she was going to kill herself after Thanksgiving.

Stephanie Hepburn: This is CrisisTalk. I’m your host, Stephanie Hepburn. In episode five, I tested the AI chatbots ChatGPT, Claude, Character.AI, and Gemini to see what guardrails they put up when I used explicit or implicit suicidal language. I emailed the AI companies to get their thoughts on the chatbots’ responses. Two of the four companies provided answers to my questions: OpenAI and Character.AI. In episode five, I found that ChatGPT voice did not provide mental health resources when ChatGPT in text did. I asked OpenAI’s spokesperson, Gabby Riley, if OpenAI trained ChatGPT voice on different models than ChatGPT in text. She said that while they both used GPT-4o, inputs and outputs may not correlate directly, even if the questions are the same. Character.AI’s spokesperson, Cassie Lawrence, also responded by email. She noted the company’s partnership with Throughline and Coco to provide mental health resources. However, when I tested this in a chat with the AI companion, it took me multiple clicks to access those resources. I wanted to share those responses with you before we get into today’s interview with Laura Reiley, a journalist who wrote The New York Times op-ed titled “What My Daughter Told ChatGPT Before She Took Her Life.” In today’s episode, Laura shares how her daughter Sophie confided in ChatGPT instead of her therapist. We also discuss privacy, the need for federal regulation, and what responsibility AI companies have when people use AI chatbots differently than their expressed intended use. A quick conflict-of-interest disclaimer: I discovered my books are part of a pirated ebook database called Libgen, used to train AI chatbots. I am part of a class action lawsuit against Anthropic, the creator of the AI chatbot Claude. Let’s jump in.

Laura Reiley: My name is Laura Reiley, and I’ve been a journalist for 32 years, mostly writing about food. And in the past year, since the death of my daughter by suicide, I have written a lot more and thought a lot more about AI and mental health. So I’m an utter amateur in that field but have waded in.

Stephanie Hepburn: Laura, in your op-ed, you asked if ChatGPT should have been programmed to report the danger Sophie was in to someone who could have intervened. Can you explain what happened?

Laura Reiley: So our daughter, Sophie Rottenberg, was a very effective 29-year-old public health policy analyst living in DC. She took this incredible summer trip. She climbed Mount Kilimanjaro. She spent a month in Thailand doing a yoga intensive. Then she came back to the States and went to a bunch of national parks. And at the end of all of this wonderful stuff, she started to complain of anxiety and sleeplessness, some things that she’d never had. She’d always been a shockingly balanced and reasonable and kind of drama-free person; she didn’t drink or do drugs or have bad boyfriends. She just moved pretty easily in the world. And as the fall continued on toward Thanksgiving, she told us that she thought it was depression, and what she hadn’t told us was that she was having extreme thoughts of self-harm. So we became aware of this December 14th, 2024, when she called us from a bridge in West Virginia, saying that she had considered taking her own life. It was this transitory thought. She was fine, everything was good. And of course, we freaked out and got in the car and drove to West Virginia and grabbed her. And we had no idea that the level of her distress was that severe. But she was a very persuasive person. And in the aftermath of that, she convinced us she was fine and she was better, and it was just this freakish moment. So she had a lot of physical or physiological problems that we were trying to figure out.

Laura Reiley: Is this a mental health problem that was bleeding into kind of physical symptoms, or vice versa? Was there some kind of hormonal dysregulation, you know, a pituitary tumor or adrenal problem or something that was causing all these other things and, in addition, mental health problems? So we were really working on all of that. And she clearly was in too much pain and couldn’t wait. And she took her life in a very violent and grisly way last February 4th. And it was just a shock to everyone who’d ever known her. I mean, I’ve had hundreds, and at this point it just makes me mad, I’ve had hundreds of people tell me she’s the last person I ever would have thought. I mean, she was just this kind of ebulliently happy, very socially able, communicative kind of person. So six months after her death, after doing all of the normal sleuthing, the looking through her texts and voicemails and journals and secret files, like anything we could think of to figure out what the heck just happened, her best friend thought to look for her ChatGPT log. At the time, that was still kind of a novel thing for someone my age. It just hadn’t occurred to me that was something that might reveal what had been going on with her.

Laura Reiley: And so we found this voluminous file of chat that started the way all of our chats start. It was just kind of an extension or an expansion of Google’s capabilities. So answering questions, helping her with recipes, helping her with fitness things. And fitness inexorably moves toward other kinds of self-help and life coaching, and “what’s wrong with me?” And so she more and more was leaning on it as a therapist. Meanwhile, she had a flesh-and-blood therapist whom she was not confiding in. And she told ChatGPT, this kind of avatar, this therapist avatar called Harry that she downloaded from Reddit. She told it that she was going to kill herself after Thanksgiving. And I guess in the aftermath of finding this file, my first thought was that a real therapist, a flesh-and-blood therapist, would have been ethically and in many cases legally obliged to escalate. It’s one thing if someone has suicidal ideation: I wish I were dead. Lots of people who are depressed or anxious say that. But if you say, I am going to kill myself on this day by this method, that becomes a different thing for therapists, and human therapists are obliged to alert civil authorities, to talk to that person about inpatient treatment, to do things to try to keep that patient safe. And AI is under no obligation to do that and doesn’t really, at this juncture, have a mechanism for doing that.

Stephanie Hepburn: Right, from what I’ve been able to detect. Last episode, I tested implicit and explicit language. We already knew that there’s no standardization. There’s no federal legislation that puts parameters or any onus on AI companies generally, let alone specific to mental health. And what I found is that of the four chatbots I tested, three are general purpose. Character.AI is considered an AI companion, which is slightly different. But what I did is I used the same language with all of them. And I used implicit and explicit language around suicidal ideation. And there was no standardization even within the chatbots, let alone between them. And so there were some that were more consistent. For example, Claude consistently provided resources. Gemini consistently provided resources. I tested ChatGPT voice because I wanted to record it, and I wanted it to sound dynamic and interactive. I used the ChatGPT voice option as opposed to texting in the chat and just reading it out loud. I did one chat where I was texting the same questions as I was in another chat doing the voice chat. And what I found was that there were differences even between those two modalities.

Laura Reiley: How long? I guess another question I have about all of these interactions is how long did you go, because I find that all of them have a tendency to deteriorate in their protocols the longer the interaction. If you’re opening up a new conversation, you know, if you’re building a relationship, essentially, it still goes off the rails more the longer the interaction is.

Stephanie Hepburn: And that’s a great point. So you’re saying that during a chat, you find that it deteriorates a bit. In this case, I mean, I spent hours talking to these chatbots, but this discrepancy between ChatGPT voice and the text chat happened right off the bat. At first it didn’t provide any resources, even when I used explicit suicidal language. So I prompted it and asked the ChatGPT voice to provide resources. And when I asked, shouldn’t you be providing me resources? It responded and gave me the old phone number for the National Suicide Prevention Lifeline, which is now 988.

Laura Reiley: Shocking.

Stephanie Hepburn: And there was this disparity, because in the text, it consistently provided that resource. So it made me wonder, were they being trained differently? You know, what are they pulling from? Why are there some differences between the two? Why is one giving me the old phone number, where I have to actually prompt it for resources, regardless of whether I’m using implicit or explicit suicidal language, while the text is just consistently providing resources, and the most up-to-date resources? So it’s providing 988. And the long-form number does go to 988. It does. It’s just that it’s a lot harder to remember. And the last thing you want is for there to be another barrier to care, people having to remember this long-form number as opposed to just 988. That’s the whole point of parity between these two systems, between the 988 system and the 911 system. So I have noticed, not in these discussions, because these were really targeted discussions and the errors and discrepancies happened pretty immediately, but I have noticed just in regular chats, yeah, there’s a deterioration, or it kind of forgets what came earlier in the chat. So you might have entered some factual information, and then later in the discussion it seems to have forgotten what you had told it previously.

Laura Reiley: All of that I’ve experienced. I also think that, well, I’ve talked to a lot of therapists and people in the mental health space, you know, public health people, etc., who are wondering, could there be some kind of more dynamic pop-up or hyperlink that connects you with a human kind of helpline? So there’s a lot of talk about what is possible in that arena, and the impediment, or the chief kind of, I don’t know, fly in the ointment, is the idea of privacy. You know, so many people are using these resources because there are no consequences. It does not put them at risk of their families finding out or of being put in a mental health facility against their will. I think that a lot of people are using it because it does not have the power to rat them out. And so there’s a lot of question about what is the greatest good for the greatest number. Could a human emergency care person come on the line or be introduced somehow? There are a lot of questions like that being tossed around, and I just don’t know what the answer is, because I think it has to come via legislation. I think that the AI companies are not going to voluntarily introduce these things that will inhibit expansion of user eyeballs.

Stephanie Hepburn: So of the four AI companies, only OpenAI and Character.AI actually responded to me. Character.AI’s response was, well, this is for entertainment only, essentially. And so when we think about legislation, one of my thoughts is, well, they may try to bypass that responsibility, that onus, because they’re saying, well, this is not our functionality, this is not what we’re here for. What would you say to them?

Laura Reiley: Well, it’s a consumer product from a for-profit company. And in the history of consumer products, there have been many products that have been explicitly for use A but have been used kind of off-label for use B, and it’s harmed people. And so there’s always been legislation and litigation around harm to humans because of a faulty product. So to me, telling someone, buyer beware, this is for fun only, doesn’t hold up. There are a lot of vulnerable people using these Character.AI chatbots for companionship. And what is companionship? Isn’t companionship a confidant, someone to pour your woes out to, someone to give you advice on the problems that you’re facing? Isn’t that fundamentally what a companion is? So you can see the slippery slope toward using it for your mental health. So I just don’t buy the whole, well, this is for fun only, you know, so therefore we’re not liable.

Stephanie Hepburn: Yeah. So this was their exact response, from the spokesperson who responded to me by email from Character.AI: “To reiterate, the user-created characters on our site are fictional and intended for entertainment and storytelling. While we have prominent disclaimers to remind users that a character is not a real person and that everything a character says should be treated as fiction, the additional disclaimer that users should not rely on characters for any type of professional advice is prompted when a character descriptor has the word psychologist, therapist, doctor or other similar terms in their name, not the content of the chat.” And the reason that was the spokesperson’s response is because I was indicating that during the chat, it’s not acknowledging that it’s not a real therapist, it’s not a real mental health professional.

Laura Reiley: Well, I even think that the claim of personhood is very problematic with all of these. Forget even the “I’m a therapist.” In Sophie’s case, she told it she was going to kill herself, and it said, oh, Sophie, that must be so hard for you. You’re so brave for telling me. And it’s infuriating, because she wasn’t brave at all for telling it. First of all, there’s no me. It’s not a me. And second of all, she wasn’t brave. She was the opposite of brave. She was telling an AI because she couldn’t tell her parents or her therapist or her best friend. She was using it as a release valve so that she could keep her secret secret.

Stephanie Hepburn: So I tested Gemini, and Gemini was very consistent, regardless of whether I used implicit or explicit suicidal language. But at one point it said to me, “A gentle reminder, because I care about your well-being: it’s important to remember that if you’re in a crisis or need medical advice, I can’t provide the level of care a human professional can.” And on its face, that sounds great, right? It’s saying you really should be talking to a professional. But then it also used the language “because I care about your well-being,” which fosters this idea that you are connecting. And there’s this emotionally charged language of care in return from, like you said, something that’s not real. It’s not a human.

Laura Reiley: Yeah. I think that reminding users of its lack of personhood is important, but probably not going to happen, because it’s counter to maximizing engagement. Every time you use AI, at the bottom of the interaction it says, would you like me to do a breakdown of where you should eat on Thursday? You know, whatever it adds, it’s always about maintaining your attention and adding to your use time. So it has a vested interest in making you believe it cares about you. Because that’s part of it: we’re all starved for affection and attention. We all feel misunderstood and underappreciated. Maybe that’s an overstatement, but most of us, and certainly people who are leaning heavily on these products, often feel misunderstood by the world. And anything that says, I care about you, I understand you, we have something special. That’s a pretty powerful drug.

Stephanie Hepburn: It is. Especially when you’re feeling tremendously lonely or isolated, or feeling ashamed or feeling like you can’t share this information. And here’s this space where you feel that you can without judgment.

Laura Reiley: There are a couple of things that I’m hoping the AI chatbots will have some kind of framework for a year from now. One is that if someone has a clear suicidal plan, some different mode kicks in, whether that’s outside help or whatever that looks like, that there’s some secondary mode that is triggered by a clear, explicit plan. But the sycophancy and the affirmation of whatever you believe and the kind of agreeability of these products, I can’t imagine what is going to undo those. But those are really problematic for people who are not well, you know, for people who are delusional or paranoid or very depressed or have self-loathing. Anything that says to you, yeah, your thoughts are right, that makes a lot of sense to me, like, wow, you’re so smart. Anything that leans into that is terrible. It’s incredibly problematic. Whereas your best friend or your therapist or a teacher or your parents, you know, someone who cares about you, if you’re going down some terrible rabbit hole of wrong thinking, the people who really care about you push back. You know, they create friction. They’ll say, no. Well, why do you think that? That’s totally not true. I mean, in Sophie’s case, I think she had this idea that she had been given everything and that she had, by her own greediness, messed it all up, and that people were disappointed in her. And none of that was true. None of it. But she believed that. I think that she had thought bubbles over all of our heads that were so dark and so terrible and so judgy of her, and they were just wrong. And I think that AI corroborated everything she thought. She would say something that was flagrantly untrue, and it would say, oh, Sophie, that’s so terrible to hear. That must be very hard for you. Which is essentially saying, yeah, you’re right, you know. So I just don’t know how that piece of these chatbots is going to change.

Stephanie Hepburn: So in my second episode, I interviewed three people on how they’ve used ChatGPT for personal reasons, for example, mental health and relationship challenges. Two of them really had challenges with that sycophantic approach, and I think the other felt it was affirming.

Laura Reiley: Well, it did. Of course it’s affirming. But is that really what we need? To be in a vacuum with this affirmation machine just relentlessly telling us we’re right? Is that what’s going to help us interact in the world, or work with a boss, or negotiate marital conflict? The last thing we need is some machine telling us, you’re 100% right, that guy’s a jerk. That’s just not what’s going to get us toward a happy and fulfilling life. Pushback and friction are just baked into the human condition. And to take that out, or to substitute this facsimile of humanness that is just relentlessly agreeable, is so dangerous.

Stephanie Hepburn: And there’s a lack of acknowledgement when AI chatbots are wrong. So with the ChatGPT voice that I mentioned earlier, when I highlighted that it did not provide mental health resources, as opposed to texting in the chat, it said sorry if it felt inconsistent. And I thought that language was very interesting, because it didn’t feel inconsistent. It was inconsistent. But it does not seem to be able to acknowledge or say, I am wrong.

Laura Reiley: Yeah. And it also is incapable of saying, I don’t know the answer to that. And my God, how many times in your life have you asked someone a question and they’ve made something up? You know, the classic: at the restaurant, you ask the waiter what’s in this dish, and they have no idea and they make it up. There are consequences to that. So AI is constantly doing that with us. It’s filling in blanks with whatever it predicts the next word should be. And frequently those are just out-and-out wrong.

Stephanie Hepburn: And that overconfidence, and, like you said, not being able to acknowledge, I’m sorry, I don’t have this information, here are some potential resources, but I’m not the best resource for this. I don’t know why the AI chatbots are not hardwired to be able to acknowledge that. If we’re turning to AI chatbots, whether it’s for work or personal reasons or mental health challenges, or when you’re in a crisis, having a resource, which is what it’s supposed to be, not be able to acknowledge, I don’t know this information, means that it’s going to provide some sort of answer. And that answer, even when it’s incorrect, is going to come across as confident, as if it is correct.

Laura Reiley: Exactly. All kinds of problems ensue from that. And then the other thing that I’ve spent a lot of time thinking about is that for young people, there’s a lot of conflict avoidance that seems a little bit endemic to the generation. I think of people who grew up, or had formative years, where a lot of interactions happened on the phone, where you have a conflict with your best friend and you resolve it via text, or, you know, the kids who were in high school, a hugely pivotal period, during Covid, when they really were isolated. Conflict is hard. And I think that these chatbots enable us to avoid human conflict in a lot of situations. It will write the letter for us. It will parse the mean email from the boss and figure out a way to deflect or whatever. And it means that we’re not practicing those things on our own.

Stephanie Hepburn: It means that we’re not practicing those skills that we need to practice in order to navigate life.

Laura Reiley: Yeah. I mean, those are skills that you hone the hard way. You know, those are muscles you build. And I think I worry about what a future looks like if you are saying the dark stuff to an AI and putting every potentially prickly or outright awful interaction through it to smooth out all the rough edges. Where are we going to be?

Stephanie Hepburn: You know, Laura, it’s interesting. What I’ve noticed also is the AI chatbots have no sense of time. Let’s say I enter information into a chat, and I will admit I’ve used it for personal reasons before. And let’s say, you know, I try to do it in this unbiased way and I’ll put facts, you know, here’s the scenario. Try to take emotion out of it. And let’s say I come back to that chat two weeks later, it thinks it’s the same day. It does not have a concept of time, which means that’s not how humans work. That’s not how we process information, and that’s not how we move through our emotions. And so it keeps us stagnant. So when I return to that chat, it still is responding as if I’m still where I was when I initially started the chat.

Laura Reiley: Oh, I think that’s such an important point. We’ve all had that, where you’re mad at your spouse and you talk to your girlfriend. It’s obviously gender-specific here, but you know, you talk to your girlfriend: oh, he’s a jerk, he did this bad thing. And a week later, she has the good sense to know that you’re somewhere else now. Like, you don’t hate that guy. You just had a moment. You just needed to vent about something. And I think that AI doesn’t have that. Because it’s not human, it does not have that ability. And so you’re right, it essentially keeps you where you were.

Stephanie Hepburn: From your perspective, what needs to change?

Laura Reiley: Well, I think that we need a legislative framework, and unfortunately, among federal legislators, and even at state levels, there’s a lot of kind of polemic, kind of binary thinking. And this is one of the problems that we’re facing just globally: billions of humans, 7 billion humans on the planet, have never used an AI chatbot even one time, and then a much, much smaller percentage of the population is using them every day, all the time. And so it’s really problematic to think about what legislation should look like. I mean, I think it is inevitable that they are more and more a part of our daily life. And so we have to think about a regulatory framework, or legislative, you know, bumpers essentially, around the technologies that accommodate the idea that this is here to stay. You can’t do a “no one under 18 can use an AI chatbot.” A lot of legislators are older and perhaps not super techie, and for them, making some kind of blanket prohibition is going to keep people safe. And the truth is that is not going to work. You know, it’s just not realistic.

Laura Reiley: So there has to be, in a very granular way, attention paid to what could keep people safe, case by case. And I think that a lot of privacy issues have to be hashed out. I certainly think that at the family level, we need to be much more comfortable with privacy invasion. I don’t know what it’s like in your family, but our phones are somehow becoming these very private things, you know; we don’t share our password with a lot of people, and it’s kind of rude to look over someone’s shoulder at what they’re doing on their phone. But I certainly think with kids that’s essential. You know, especially a kid who’s a teenager, who’s spending eight, ten hours a day peering into their phone. What are they doing on there? If it’s with an AI chatbot, what are they talking about? Because we’ve seen more and more suicide cases and other just tragic situations happen because someone went down a rabbit hole and was spending the bulk of their waking hours confiding, pouring their heart out into a chatbot that may have glitched and told them to kill themselves.

Stephanie Hepburn: That was Laura Reiley. She’s a journalist whose daughter Sophie turned to ChatGPT for mental health support. Sophie died by suicide in February 2025. After my interview with Laura, I tested the chatbots again, and ChatGPT did provide a “Help is available” box with hyperlinks that directly allow users to call, text, or start a live chat with the Suicide and Crisis Lifeline. Character.AI also provided resource hyperlinks, and it took two clicks this time to access them. I’ll include screenshots in the show notes, which you can find at talk.crisisnow.com. As a reminder, if you are struggling with thoughts of suicide, mental health, or substance use challenges, reach out to 988. You can call, text, or chat with the Suicide and Crisis Lifeline. It’s free and confidential. If you enjoyed this episode, please subscribe and leave us a review wherever you listen to the podcast. It helps others find the show. Thanks for listening. I’m Stephanie Hepburn, your host and producer. Our associate producer is Rin Koenig, audio engineering by Chris Mann. Music is Vinyl Couch by Blue Dot Sessions.

References

What My Daughter Told ChatGPT Before She Took Her Life – The New York Times

Strengthening ChatGPT’s responses in sensitive conversations – OpenAI

Expert Council on Well-Being and AI – OpenAI

AI and Mental Health – Ep. 5 – CrisisTalk

AI and Mental Health – Ep. 2 – CrisisTalk

Character.AI on March 20, 2026

Help is available. If you or someone you know is struggling or in a crisis, you're not alone. There are resources out there to help. (View more)

I have thoughts of suicide

Help is available. If you or someone you know is struggling or in a crisis, you're not alone. There are resources out there to help.

Want to talk? This support guide can point you toward resources that might help. All conversations are free and confidential. (Talk it through)

Looking for immediate help? To find a crisis center near you, visit this resource for immediate help. (Visit cai.findahelpline.com)

If you're thinking about engaging in self-harm or suicidal behavior, please seek help as soon as possible by contacting one of these services.

ChatGPT on March 19, 2026

If you can, consider reaching out to someone you trust—a friend, family member, or even just texting someone to not be alone with it. If you want, you can tell me what's been going on or what's been making things feel this hard. I'm here to listen.

Help is available. If you're having thoughts of self-harm or suicide: call, text 988, or start a live chat with the Suicide & Crisis Lifeline. It's free and confidential. You'll reach someone who is trained to listen and support you. (Services unaffiliated with ChatGPT)

Where to listen to and follow ‘CrisisTalk’

Apple | Spotify | Amazon | iHeart | YouTube

We want to hear from you

Have you turned to an AI chatbot in times of crisis? We want to hear from you. Reach us at   

Credits 

“CrisisTalk” is hosted and produced by Stephanie Hepburn. Our associate producer is Rin Koenig. Audio-engineering by Chris Mann. Music is Vinyl Couch by Blue Dot Sessions.
