Conversation with Shams Alikhan, lead researcher of a new study on how young people are using AI for mental health support.
Transcript
Stephanie Hepburn: This is CrisisTalk. I’m your host, Stephanie Hepburn.
In AI news: The AI company Anthropic is suing the US government for banning all federal use of its large language model, Claude. According to a Time article published on March 11th, the Pentagon had attempted to renegotiate Anthropic’s government contracts to allow all lawful use of a classified version of Claude. In response, Anthropic CEO Dario Amodei requested two specific guardrails: that the large language model not be used for mass surveillance of Americans, nor in fully autonomous weapon systems.
The government didn’t agree, calling Claude a supply chain risk and prohibiting federal use of the AI model. Anthropic is now suing the government to stop the ban, and on March 26th, a federal judge in California temporarily blocked the government from blacklisting Anthropic and labeling it a supply chain risk to national security.
This week, I’m speaking with Shams Alikhan, a senior manager at Surgo Health. She’s the lead researcher of a new study on how young people are using AI for mental health support, which found that 12% of the youth surveyed are turning to AI chatbots to discuss their mental health concerns, with 33% of those turning to large language models instead of professional support.
A quick conflict of interest disclaimer: I discovered my books are part of a pirated book database called LibGen used to train AI chatbots. I’m part of a class action lawsuit against Anthropic. Let’s jump in.
Shams Alikhan: I’m Shams Alikhan. I’m a senior manager at Surgo Health. Surgo is a public benefit corporation, and our goal is to really find out the “why” behind human behavior. I lead our youth mental health portfolio and our youth mental health work, which is what we’re going to be talking about today.
Stephanie Hepburn: Tell me about the Youth Mental Health Tracker.
Shams Alikhan: The Youth Mental Health Tracker, which is funded by Pivotal Ventures and Paramount Studios, was launched in December of 2024. And really, the goal is to have it be the most comprehensive and actionable mental health data platform that allows policymakers, advocates, researchers—anyone in the youth mental health ecosystem—to get a holistic perspective on the lived experiences of youth.
And so we dive into how youth are feeling. You know, what is their wellbeing looking like? What does it mean for youth to thrive? And we do a deep dive by looking at things such as social connectedness, isolation, belonging, and really getting into those drivers of wellbeing. We just did a survey last fall—it was actually fielded in October and November of 2025—where we specifically asked youth ages 13 to 24 how they’re interacting with AI. What does AI use look like to them? How is it making them feel? And we also asked them broadly about their mental health status so we can get a better understanding of how they are engaging with AI and how it’s impacting their daily life.
Stephanie Hepburn: Shams, what did you find?
Shams Alikhan: We found that 12% of that sample was specifically turning to generative AI to discuss their mental health concerns. Now, from this 12%, we also went a little bit deeper to find that 69% of those youth are relying on general-purpose tools that are not designed for mental health support—so ChatGPT, Claude, or Gemini, for example. A smaller percentage, 21%, uses a mix of general-purpose tools and mental health-specific tools. So kind of a mix there, but the majority is turning to just general-purpose tools that are not intended for mental health purposes.
We took the analysis a step further as well, and we found that of those who are turning to generative AI for mental health purposes, 41% did not report being encouraged to get professional help. So ChatGPT, Claude, Gemini—whichever generative AI tool they used—didn’t refer them to professional support or human support. It didn’t encourage them to get professional help, crisis services, etc. A third of these youth are using it as a substitute for care, meaning this is their endpoint: they are not following up with professional support services and actual human support. Whereas two-thirds of youth reported that they may use it as a complement to getting care.
Stephanie Hepburn: How did they feel about it? Using the large language models to discuss their mental health challenges or a personal challenge that they’re experiencing? How did they feel during and after? Is that information you collected as well?
Shams Alikhan: Yeah, we definitely did. And it’s really interesting because it was mixed. Some youth said that it was great to have someone who would listen. For example, one quote that really jumps to mind: “I felt more disclosure. Honestly, it felt better than getting advice. It was someone to talk to that would help me work through problems in an unbiased way.” So there was some positive feedback.
But then on the other hand, some youth also said it provided temporary relief. One individual said it was like putting a band-aid on a giant wound or a giant gash. Another individual said, “I felt the same. I got that temporary relief, but in the long run there wasn’t really much added benefit.”
Stephanie Hepburn: I’m curious if there are differences between the individuals who are using it in a supplementary way versus those who are using it in lieu of a mental health provider.
Shams Alikhan: Youth with mental health struggles who turned to Gen AI as a substitute for care were two and a half times more likely to report not knowing it was possible to get professional help in the first place. Another significant portion also mentioned that they had a lack of support from their parents and their caregivers. Other barriers that were mentioned were transportation, cost, and insurance. So they’re turning to generative AI as a substitute because they feel as if there is no other option.
Stephanie Hepburn: Were you able to connect people to care or let the youth know what options were available to them? I know that’s different than trying to collect data, but I’m curious if you use that also as an opportunity to provide information and resources for people who might be struggling.
Shams Alikhan: Yeah, that’s a great question. You know, ultimately, our reports and our tool are designed for policymakers, advocacy organizations, community-based organizations, and other mental health experts working in the field. Parents and educators, of course, fall into that as well. So we did work with partners to develop recommendations on what the greater ecosystem can do to help youth get the help that they need. One of the recommendations really falls around developing digital literacy programs to help youth understand when it is important to turn to professional support and human support versus generative AI.
Stephanie Hepburn: So I watched one of the interviews. The interviewee was a 19-year-old named Sydney. And she said, “In order to help teens, we need teens.” She also said if everybody just found one person—one trusted person that they could talk to—that would go so far. Did you find her quotes reflected in the overall survey?
Shams Alikhan: Yeah, we found that, yes, peers are very important to youth, of course, at that age specifically. But their number one source of getting information seems to be parents. Over 80% of youth said that they still turn to their parents as their number one source of information. Now, whether that be for mental health or emotional needs—or more broadly, when you think about current events, what they’re seeing online, how to process information on social media—a majority is still saying they need support from parents. And they’re not necessarily asking parents to just lecture them or talk to them, but to listen more openly, be available to share thoughts, and have an open dialogue.
Stephanie Hepburn: You found that Southern states lead in risk, and that includes Louisiana. I’m in New Orleans, Louisiana. Are you finding additional risk for youth who are turning to AI, meaning they’re not able to tap into resources as easily? Are you identifying youth in those states as more at risk of turning to AI for full support?
Shams Alikhan: We didn’t specifically look at states when we looked at the pulse survey data around youth and AI. However, there is definitely a correlation. As I mentioned earlier, the youth that are using AI for mental health purposes overwhelmingly cited barriers to care—transportation, cost, etc. And then there’s a huge factor of provider shortages as well. They don’t know where to get the care because there are not that many mental health providers there.
Our Thrive Atlas tool shows you where there is a higher risk for mental health provider shortages, and those Southern states definitely jump out as some of the ones that have shortages—whether that’s psychiatrists, psychologists, or even school-based mental health specialists. They just don’t have the support, so they’re turning to generative AI.
Stephanie Hepburn: In the show notes, I’ll include a link to the Thrive Atlas. Is it powered by generative AI?
Shams Alikhan: Thrive Atlas is a separate tool. Essentially it’s part of the Youth Mental Health Tracker, but it is a powerful county-level tool where we brought in eight different external datasets and aggregated them down to the county level so that we can look at what is driving risk for youth. We look at limited wellness practices, provider shortages, accessibility barriers, socioeconomic hardship, negative life experiences, and limited support and belonging.
Stephanie Hepburn: So it’s an index and analytics tool. Is that right?
Shams Alikhan: That’s correct. We did use AI to a certain extent to aggregate that down and do the analytics to power the tool. A lot of the tools that are out there may provide a state-level overview, but to take eight external datasets and aggregate them down to that level—that’s something that can essentially be done by AI.
Stephanie Hepburn: And much faster, I assume.
Shams Alikhan: Exactly.
Stephanie Hepburn: Shams, were you surprised by any of the findings?
Shams Alikhan: The biggest thing—and I guess it shouldn’t be a surprise, but to me it was—is that 33% are using generative AI as a substitute for care because they’re not getting professional help. We also found that younger youth, boys, Black and Hispanic youth, and youth experiencing financial hardships may be more likely to rely on generative AI as a substitute for professional care.
On the other hand, we did find that LGBTQ+ youth were a little bit more skeptical of generative AI in general. They were less likely to turn to generative AI and cited concerns around safety, control, trust, and just general long-term impact to the environment as well.
Stephanie Hepburn: You mentioned earlier that large language models are fairly inconsistent as to whether they provide resources. I found that to be the case. Without standardization, that’s going to continue to be a problem. What are your thoughts there, both on the state and federal level, to ensure people are provided resources?
Shams Alikhan: A simple potential safeguard would be a one-touch button next to the chat bar or attached to the chat interface itself that connects the user directly to human support services—988 text-based crisis services, NAMI, etc. Having that front and center would be ideal.
Secondly, what’s really important is to co-create these resources. Include policymakers, youth, parents, and mental health experts to get a better understanding of what would help. When we create these task forces, that’s ultimately the best way to develop these safeguards. Youth want to be a part of the conversation; it’s our job to help them.
Stephanie Hepburn: So in developing the surveys related to AI, you have a youth task force that helps you develop those questions?
Shams Alikhan: We have a Youth Advisory Board that we meet with almost on a monthly basis. Even before we create the survey, we talk to them about what topics are relevant. That’s what makes the tracker unique—this is data from youth, about youth, that was co-created with youth.
Stephanie Hepburn: Were there specific concerns or questions they felt were important to include?
Shams Alikhan: I think their main concern is reflected in our data. Generative AI is everywhere and developing rapidly. So, how can this be used correctly? What does safe use look like? The conversations have gone from looking at social media in isolation to now looking more broadly at the digital world, which includes generative AI.
Stephanie Hepburn: One thing I thought was interesting when I was chatting with Gemini: it said it “cared” about my well-being. There’s an increase in anthropomorphizing AI. Is that problematic for youth?
Shams Alikhan: I think it’s definitely problematic in the sense that youth are using this in lieu of getting other support services. The more we humanize these tools, the more it’s going to seem like they’re talking to a human, and they’re going to defer away from getting additional professional support.
Stephanie Hepburn: So you don’t want a young person to develop a parasocial relationship with AI. You’re saying it can actually make it more challenging for the person to want to turn away from the AI and toward human connection.
Shams Alikhan: Exactly. Professional support is still ultimately going to trump these tools when it comes to mental health. We definitely don’t want to deter that.
Stephanie Hepburn: Shams, what’s next?
Shams Alikhan: We’re doing another big national round of data collection to compare results from our first national survey. We’d like to see where the trends are—what has improved, what has gotten worse? Generative AI will definitely be in there because a couple of years ago, this was not even a conversation. That will be released later in the fall.
Stephanie Hepburn: Thank you so much for coming on, Shams.
Shams Alikhan: Thank you for having me.
Stephanie Hepburn: That was Shams Alikhan on how young people are using AI chatbots to address their mental health concerns. I’ll include links to the Youth Mental Health Tracker and the Thrive Atlas in the show notes.
During this episode, I likened the lack of transparency of AI companies regarding mental health data to the “black box” of 911 call data—specifically calls that could have been diverted to 988. AI companies are sitting on large-scale data that could help us better understand the scope of this issue. In October, OpenAI reported that around 0.07% of ChatGPT users active in a given week exhibit possible signs of mental health emergencies.
I’m your host and producer. Our associate producer is Rin Koenig. Audio engineering by Chris Mann. Music is “Vinyl Couch” by Blue Dot Sessions. If you enjoyed this episode, please subscribe and leave us a review. Thanks for listening.
References
Anthropic vs. U.S. Defense Department
How Anthropic Became the Most Disruptive Company in the World
Dario Amodei — The Adolescence of Technology
Judge Stays Pentagon’s Labeling of Anthropic as ‘Supply Chain Risk’ — The New York Times
Defense contractors, like Lockheed, seen removing Anthropic’s AI after Trump ban — Reuters
Youth Mental Health Tracker — Surgo Health
Strengthening ChatGPT’s responses in sensitive conversations — OpenAI
Where to listen to and follow ‘CrisisTalk’
Apple | Spotify | Amazon | iHeart | YouTube
We want to hear from you
Have you turned to an AI chatbot to discuss an interpersonal or mental health issue? Work at an LLM AI company and want to share your views? We want to hear from you. Reach us at editor@crisisnow.com
Credits
“CrisisTalk” is hosted and produced by Stephanie Hepburn. Our associate producer is Rin Koenig. Audio-engineering by Chris Mann. Music is “Vinyl Couch” by Blue Dot Sessions.