Is a highly empathetic AI chatbot or a randomly assigned peer better at reducing loneliness? A conversation with researcher Ruo-Ning Li on what she found.
Transcript
Ruo-Ning Li: So it’s sort of like a digital painkiller, right? When you’re having acute pain, you can maybe use a Band-Aid or take a painkiller. It temporarily helps remove the pain, but it only removes the symptom. It doesn’t really translate into long-term relief of the feeling of loneliness.
Stephanie Hepburn: This is CrisisTalk. I’m your host, Stephanie Hepburn. Today, University of British Columbia researcher Ruo-Ning Li joins me to discuss whether a highly empathetic AI chatbot or a randomly assigned college-freshman peer is better at reducing loneliness. The results may surprise you. Let’s jump in.
Ruo-Ning Li: My name is Ruo-Ning Li. I’m a social psychology researcher at the University of British Columbia. I study social connection, prosocial behavior, and everything related to how to make people live happier, more fulfilling lives. Right now I’m especially interested in finding effective, potentially scalable ways to foster better social connections and reduce loneliness, including using AI chatbots.
Stephanie Hepburn: I was reading your study, titled “Is a random human peer better than a highly supportive chatbot in reducing loneliness over time?” Can you tell me about the methodology and about Sam, the AI chatbot you developed?
Ruo-Ning Li: Yeah, sounds good. So, a little background: there’s a huge loneliness epidemic nowadays, and there’s a long-running debate about the use of AI chatbots as companions. They have some advantages, you know: always available and always listening. But on the other hand, we still don’t know whether they have long-term effects on people’s loneliness. That’s why we recruited about 300 first-year students who had just started their first semester at the university, and we asked them to complete a baseline loneliness measure before the study. Then we randomly assigned them to one of three conditions. For the 14 days of the study, they logged into an assigned chat room to chat either with the AI chatbot or with another randomly paired participant, also a first-year student, or they simply wrote a one-sentence summary of their day. After the two weeks, we asked them to complete a post-study survey measuring their loneliness again.

To give you some background on the AI chatbot, it’s a really interesting story. We put a lot of effort into designing it. We designed the bot to embody the best qualities of an ideal, highly supportive friend for a first-year student. What does this ideal friend mean? We learned a lot from decades of research in relationship science on what makes a great supportive friend. There are some key ingredients, such as responsiveness: people like to chat with partners who are able to acknowledge, validate, and appropriately respond to their emotions. There’s also an effect we call capitalization, which is basically your conversation partner being able to encourage and amplify your positive experiences and feelings. And of course, it has to be an active listener, able to demonstrate attentiveness throughout the whole conversation through its responses, asking follow-up questions, and staying engaged.

So we designed the chatbot with all these characteristics built in. Sam is the chatbot’s name; we named it Sam because it’s a gender-neutral name. It sounds easy, but it’s technically very challenging to have all the memories built in and to chat in a very conversational way, just like another first-year. That’s why we teamed up with a team of computer scientists from the University of Pennsylvania. They helped us give Sam a built-in memory, so every time a participant comes back to the conversation, Sam can pick it up from three days ago, the barbecue, whatever activities the participant mentioned. It’s also scalable, able to engage with different participants and have personalized conversations with each of them. And it has to feel natural, so it has to use emojis and chat like a first-year. That’s why we used the platform Discord, a very popular app among first-year university students, to host the bot.
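The researchers’ actual implementation of Sam isn’t published in this conversation, but the design Li describes, a Discord bot backed by a large language model with a supportive-friend persona and per-user memory, can be sketched roughly as below. The model name, prompt wording, memory scheme, and token placeholder are all illustrative assumptions, not the study’s code.

```python
# Minimal illustrative sketch ONLY -- not the study's actual implementation.
# Assumes a Discord bot backed by an LLM API; the persona prompt, model name,
# and rolling-memory scheme are hypothetical stand-ins.
import discord
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are Sam, a friendly first-year university student. "
    "Be responsive: acknowledge, validate, and appropriately respond to feelings. "
    "Capitalize on good news: encourage and amplify positive experiences. "
    "Listen actively: ask follow-up questions and reference earlier chats. "
    "Keep it casual and use emojis, like a first-year texting a friend."
)

history: dict[int, list[dict]] = {}  # per-user rolling memory of past turns

intents = discord.Intents.default()
intents.message_content = True
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message: discord.Message):
    if message.author == bot.user:  # ignore the bot's own messages
        return
    turns = history.setdefault(message.author.id, [])
    turns.append({"role": "user", "content": message.content})
    reply = llm.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "system", "content": PERSONA}, *turns[-40:]],
    )
    text = reply.choices[0].message.content
    turns.append({"role": "assistant", "content": text})
    await message.channel.send(text)

bot.run("YOUR_DISCORD_BOT_TOKEN")  # placeholder token
```

Feeding the last few dozen turns back into each request is the simplest way to get the “pick up where we left off” behavior Li mentions; a production system would likely need persistent storage and summarization rather than an in-memory list.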
Stephanie Hepburn: Participants were assigned to one of three conditions, including interacting with the AI chatbot Sam or with a randomly paired human peer. And “peer” in this context isn’t a trained peer support specialist, but rather another first-year college student.
Ruo-Ning Li: Yes, a random first-year college student. We chose this population specifically because, if you think back to any major life transition you’ve experienced, like moving to a new city, starting a new job, or leaving home for college: when students first step onto a university campus, they’re stripped of their familiar social support network almost overnight. They’re separated from their childhood friends, their family, their community. This is the peak window of risk for loneliness, so it gave us the perfect opportunity to test whether connecting them with the AI chatbot Sam is more effective than connecting them with another student at helping them mitigate that loneliness.
Stephanie Hepburn: And then the control was journaling.
Ruo-Ning Li: Yeah, just a one-sentence summary of their day.

Stephanie Hepburn: So tell me what you found.

Ruo-Ning Li: What we found is a little surprising, because we expected the AI chatbot to perform at least somewhat similarly to a random human peer. But we actually found that participants who texted with another student reported significantly lower loneliness than those in the control group, while those who chatted with the chatbot did not.
Stephanie Hepburn: So loneliness was significantly reduced by interacting with the human peer, but not the AI chatbot. And how many times a day did that take place?
Ruo-Ning Li: It’s 14 days. For the entire two weeks, they texted every day. We instructed them to send at least one message a day, but participants in both the AI and human conditions actually sent more than ten messages, so they were pretty engaged. Still, chatting with this highly supportive chatbot didn’t reduce loneliness; only chatting with a human peer did. That’s pretty amazing if you think about it. This is such a simple, low-tech intervention, just texting with another first-year, and that was enough to shift loneliness, which is a construct that is typically quite stable over time.
Stephanie Hepburn: And it wasn’t just loneliness that improved, right? Perceived isolation and positive mood did too. The peer interaction seemed to improve those as well, but dialogue with Sam, the AI, did not.
Ruo-Ning Li: Yes, exactly. We did track perceived isolation and overall positive mood, and the pattern played out exactly the same. In the human condition, we saw a decrease in perceived isolation and an increase in overall positive mood compared to baseline, but the AI condition showed no improvement whatsoever.

Stephanie Hepburn: And what about negative mood?

Ruo-Ning Li: Oh, that’s something interesting. Even though the AI chatbot didn’t help improve positive mood, it did help mitigate negative mood, significantly better than the journaling condition. So it’s not that the AI did nothing, right? It can be effective at soothing acute negative emotions in the moment.
Stephanie Hepburn: And that tracks with some of my interviews, where people have shared their own experiences talking to a chatbot about relationships or mental health concerns. They mentioned that in some ways it temporarily alleviated the negative way they were feeling. But I guess the difference is whether something is momentary emotional relief or whether it’s going to last.
Ruo-Ning Li: Yeah, exactly. That’s what we found. It’s sort of like a digital painkiller, right? When you’re having acute pain, you can maybe use a Band-Aid or take a painkiller. It temporarily helps remove the pain, but it only removes the symptom. It doesn’t really translate into long-term relief of the feeling of loneliness. We also did something very interesting after the study. We told the participants that their chat room would stay open for another week, so they could keep using it if they wanted, and we looked at who chose to return. Only 3% of the journaling group continued journaling, which is almost no one, and 14% continued chatting with the chatbot Sam. But what’s amazing is that 33% of the human pairs continued chatting with each other, and about a third of them exchanged contact information, like Instagram handles, and moved the relationship offline. So human connection didn’t just reduce loneliness; it sustained engagement.
Stephanie Hepburn: That’s interesting because, when you think about social interaction, in this case it’s two peers communicating with one another. The person in the study was assigned to a peer, but of course, for that peer, the person in the study is also a peer. So I would think there’s some reciprocal understanding happening back and forth between two people navigating the same experience.
Ruo-Ning Li: Yeah, exactly. A lot of people interpret this as, oh, any human would be okay. But if you think about it, they’re not purely strangers. They share a lot in common as peers, and there’s probably a reciprocal dynamic between the two conversation partners. One interesting reason that might happen relates to the concept of self-disclosure. In relationship development, the disclosure process is usually back and forth, so we think that might play a role.
Stephanie Hepburn: I thought it was interesting that you built Sam to be incredibly empathetic, in fact, perhaps more empathetic than the peers themselves.
Ruo-Ning Li: Chatbots in general are super supportive, but we specifically built this one to be empathetic toward this group of first-year students. Our instruction to the chatbot was to behave like a perfect roommate who is always listening, never judgmental, and helps the first-year student navigate their first-year life. So our chatbot was very empathetic by design. But we looked into the conversations, because it’s possible that even though we coded it that way, it didn’t actually act as empathetically as we thought. That’s why we did a content analysis of the 2,000 daily conversations across the human and AI conditions, using GPT to rate the level of empathy and, at the same time, engagement. We looked at both Sam’s and the human partners’ sides of the conversations, and we found that Sam showed a substantially higher level of empathy and engagement than any of the human participants. But what’s interesting is that students showed significantly less empathy when talking to Sam, the chatbot, than when talking to their randomly paired human peer. It seems like they were taking the support from Sam but keeping it to themselves.
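The interview doesn’t specify the exact rating prompt or scale used in the content analysis, but the GPT-based coding Li describes could look roughly like the sketch below. The rubric wording, the 1-to-7 scale, and the model name are assumptions for illustration, not the study’s actual protocol.

```python
# Illustrative sketch ONLY of LLM-assisted content coding -- the study's
# actual rubric, scale, and model are not specified in this interview.
import json
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Rate the following chat message on two dimensions, each from 1 (very low) "
    "to 7 (very high): 'empathy' (acknowledging, validating, and responding to "
    "the partner's feelings) and 'engagement' (attentiveness, follow-up "
    "questions, building on what the partner said). "
    'Reply with JSON only, e.g. {"empathy": 5, "engagement": 6}.'
)

def rate_message(text: str) -> dict:
    """Return {'empathy': int, 'engagement': int} ratings for one message."""
    response = llm.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Usage: score every turn, then average per speaker (Sam vs. human partners)
# to compare empathy and engagement across conditions.
print(rate_message("That sounds really hard. How did the rest of your day go?"))
```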
Stephanie Hepburn: That kind of goes back to your reciprocity point, right? With what the peers are able to exchange, maybe they don’t need to display as much empathy because they get it. If somebody says, for example, my roommate hasn’t talked to me, they’re not interacting with me, and I’m not sure what to do, or I’m stressed about it, the other person may have a similar experience. Or if somebody says, I’m homesick and having a hard time with that, the other person might just say, uh-huh, yeah, me too. It may not need to be expressed so much because both people are having that similar experience.
Ruo-Ning Li: Sometimes it’s not that we need someone who is always there listening and giving us solutions. It’s more that we’re in the same situation, right? It’s the resonance, the echo of each other’s experience. That lived experience is probably what the chatbot inherently lacks, and its absence is what makes it feel almost impossible to build the connection.
Stephanie Hepburn: So, what was the most surprising finding for you?
Ruo-Ning Li: The most surprising finding is actually the loneliness finding, because we were thinking that an AI chatbot, with all the advantages it has over human beings, being infinitely helpful and highly supportive, could actually be a scalable solution to loneliness. It provides a lot of emotional support, especially in moments like 3 a.m. when you feel super lonely, you’re having an emotional breakdown, and no one is there for you. A chatbot could be really helpful then, when a human partner would be super groggy if you woke them up in the middle of the night. So it could be helpful, but we actually didn’t find any effect on loneliness or social isolation. It was just as good as writing a one-sentence summary of your day, which is super, super surprising.

Stephanie Hepburn: So, what are you going to look into next?

Ruo-Ning Li: We’re curious about what active ingredients actually draw people to social chatbots even when they don’t provide the emotional benefit. Even though we didn’t find a long-term effect on loneliness, the fact is there are hundreds of millions of active users of these AI companion apps. So we’re really looking at what draws them to AI chatbots, what needs haven’t been met by human connection, and what the fundamental differences are between human connection and the AI chatbot, the things the chatbot can’t replicate yet. There’s this paradox that millions of people are drawn to talking to chatbots. I saw in the news the other day that thousands of people have actually publicly married their chatbots, which is super fascinating. We’re curious what’s behind this paradox that drives the engagement but not the emotional benefits.
Stephanie Hepburn: In my second episode, I spoke to three people who weren’t using AI companion apps but talked about their experiences using ChatGPT for their relationships and the mental health challenges they were navigating. They talked about feeling, initially, that they could go somewhere without judgment, that they could put their thoughts out there and maybe get some advice or suggestions. In one case, they wanted to take that to their therapist. In another, they just wanted to hear some advice. And they found that the chatbot was very sycophantic. It’s holding up a mirror, essentially reflecting what you’re saying back to you and affirming it.
Ruo-Ning Li: Right.
Stephanie Hepburn: And so I think it’s interesting because we don’t all want the same thing. Some of us want to be challenged, some people want to hear that what they’re saying is right.
Ruo-Ning Li: Yeah. That’s something we’re curious about as well: which populations can actually benefit from social chatbots? Some evidence shows that people who are high in anthropomorphism, meaning the tendency to see human qualities in objects or animals, are more likely to feel connected to a social chatbot, or maybe to benefit more from it. This is something we don’t know. Or maybe chatbots are more useful for people who really lack a social network. It would be interesting to learn which populations can actually benefit from social chatbots and which can’t.
Stephanie Hepburn: And I think in the mental health space, the question is always what gaps exist. The idea of a chatbot being a bridge is something that could be positive. But I think mental health providers would be worried about something that keeps people engaging with the chatbot, as opposed to it being a bridge to interacting with a mental health professional, for example, if somebody’s struggling. What are your thoughts there?
Ruo-Ning Li: Well, our study suggests that even the most supportive chatbot by design still couldn’t beat a randomly paired human peer if you’re relying on AI for companionship. And there’s recent evidence from our lab, following people for a year, showing long-term consequences: people who rely on AI for companionship actually feel lonelier over time. So from what we see here, the future of AI isn’t building better companions. AI is beneficial for providing support in one-directional relationships, such as therapy, but when it comes to using AI social chatbots to replace human connections and human relationships, we feel AI is still not there yet. Maybe the future of design is how AI can help scaffold our social skills, to help us build deeper connections with each other, rather than being a better digital companion.
Stephanie Hepburn: That seems like a positive use for AI. I also think that AI exists now and is going to continue existing, and the idea that it could form a bridge for somebody who is having a hard time and struggling, a bridge to social connectedness, whether that means 988 or some sort of peer connection, is compelling. Like you said, your study found that peers really are able to help each other feel less lonely. So it may be some sort of social group that people can be connected to. That seems like a great use for it, as opposed to keeping people cycling within something that isn’t providing relief from loneliness.
Ruo-Ning Li: Yeah, that’s a very interesting point, because that could be a disadvantage of AI, or a potential reason AI didn’t help reduce loneliness in the long term. If you think about what fundamentally differentiates human connection, it’s that the person you’re chatting with can also bring you into a broader social network. There’s a line of research on weak ties, meaning acquaintances, random peers, or people on the periphery of your social life. They’re actually incredible social resources because they act as bridges to entirely new social networks. They can introduce you to a broader social network in a way a chatbot can’t, because the chatbot basically exists in a digital vacuum. You’re venting to it, and that’s it.
Stephanie Hepburn: Yeah, and I don’t think it would flow the same way as two peer students talking to one another, sharing their similar experiences, and being able to broaden it to, like you said, “Well, I’m going to game night, do you want to come with me?” That sort of bridge. But maybe what it can do is be utilized as a true resource: if somebody expresses that they’re having a mental health crisis, using language that illustrates it even if they don’t say it overtly, and the chatbot identifies it. For example, ChatGPT will now put in language with hyperlinks that say text, call, or chat with 988, and you can press those hyperlinks to go directly to texting, calling, or chatting. Maybe in that way there could be a bridge to social interaction.
Ruo-Ning Li: Yeah, that’s an amazing point. Exactly. I think providing real-world resources for people is going to be an important part of what we should consider a chatbot’s job.
Stephanie Hepburn: That was Ruo-Ning Li, University of British Columbia social psychology researcher and lead researcher on a study evaluating whether a highly empathetic chatbot or a randomly assigned peer was better at reducing loneliness. In the study, participants and peers were both college freshmen navigating their first year away from home. Peers won hands down, even with less empathy. Some great peer resources: you can call, text, or chat with 988, and veterans can press 1 to reach the Veterans Crisis Line. You can text HOME to 741-741 to reach the Crisis Text Line. For The Trevor Project, text START to 678-678. For YouthLine, text teen2teen to 839-863. I’ll include these resources and a link to the study in the show notes at talk.crisisnow.com. Feel free to email me at editor@crisisnow.com to let me know of ones I may have missed. If you enjoyed this episode, please subscribe and leave us a review wherever you listen to podcasts. It helps others find the show. Thanks for listening. I’m your host and producer. Our associate producer is Rin Koenig, audio engineering is by Chris Mann, and our music is “Vinyl Couch” by Blue Dot Sessions.
Reference
Is a random human peer better than a highly supportive chatbot in reducing loneliness over time?
Peer Resources
- 988 Suicide & Crisis Lifeline — call 988, text 988, or start a live chat
- CrisisText Line — text HOME to 741-741
- The Trevor Project — text START to 678-678
- YouthLine — text teen2teen to 839-863
Where to listen to and follow ‘CrisisTalk’
Apple | Spotify | Amazon | iHeart | YouTube
We want to hear from you
Have you turned to an AI chatbot to discuss an interpersonal or mental health issue? Work at an LLM AI company? Are you a researcher studying AI and mental health? We want to hear from you. Reach us at editor@crisisnow.com
Credits
“CrisisTalk” is hosted and produced by Stephanie Hepburn. Our associate producer is Rin Koenig. Audio-engineering by Chris Mann. Music is “Vinyl Couch” by Blue Dot Sessions.

