What happens when generative AI is used to train the crisis counselors who answer mental health crisis lines? In this episode, Stephanie Hepburn speaks with Sam Dorison, co-founder and CEO of ReflexAI, the company creating AI simulations to do just that.
Transcript
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email with any questions.
Sam Dorison: For me, really, personally and professionally, the core of how I look at AI tools is: if it were a member of my family or one of my closest friends using it, what would they deserve? And we have an extensive set of AI ethics principles and actions internally that we’ve developed with an independent AI ethicist. So we’ve invested really heavily in this and have thought a lot about it, essentially every single day. And that’s the lens I see things through.
Stephanie Hepburn: This is CrisisTalk. I’m your host, Stephanie Hepburn. Our episodes so far have focused on people using AI chatbots for mental health support. But what happens when generative AI is used to train the crisis counselors who answer mental health crisis lines? Today, I’m speaking with Sam Dorison, co-founder and CEO of ReflexAI, the company creating AI simulations to do just that. Let’s jump in.
Sam Dorison: I’m Sam Dorison. I’m one of the two co-founders and CEO at ReflexAI.
Stephanie Hepburn: Can you tell me a little bit about the impetus? Why? Why did you start ReflexAI?
Sam Dorison: My co-founder and I started ReflexAI really because we had experienced a lot of the challenges that crisis line operators have. We were executives at a crisis line for several years. And for those not as familiar with the crisis line landscape, this is really just an amazing innovation over the last several decades in the United States: that there should be a free 24/7 line for anyone in the country experiencing mental health challenges, including thoughts of suicide. Historically, this was much more fragmented. There was also a national number, but a few years ago came the launch of 988, a short code that works across both phone and text, so this is now available much more easily to everyone across the country. And when my co-founder and I were leading one of these centers and leading some of this work, one of the things that we really experienced was the difficulty in training new crisis responders, the folks who are on the phone every day. We had done a lot of work up to that point with engaging eLearning: how do you make sure that folks can internalize the knowledge that they’re learning as quickly as possible to do this work? But a real bottleneck for us was role plays. You want someone, before they get on the lines, to have many opportunities to practice the different types of scenarios that they might encounter.
Sam Dorison: It’s the same way that you want an airline pilot to be able to practice, in a flight simulator, a wide range of situations that they might encounter in flight. So this was 2018, early 2019, and we started working on some of the earliest generative AI models in partnership with Google to provide this simulated, realistic training to crisis counselors. We did that first at the Trevor Project, where John and I both worked. It got incredibly good reviews from trainees. It got great reviews from trainers, from quality assurance teams, from others. And we realized, okay, we’re really on to something here. In 2021, that technology and that use case was on the TIME 100 Best Inventions of the year as the first generative AI use case on the TIME 100. And then we launched ReflexAI in 2022 to bring training and quality assurance tools to as many organizations as possible. And we’ll talk about this in some depth together today, but the most important thing, from my perspective, is that these tools are deployed in ways that work with the missions of the organizations that we partner with.
Stephanie Hepburn: So can you tell me a little bit about what that means? Maybe you could give an example of organizations that you’re working with and how these chatbots can be tailored. For example, one of the chatbots that I tested out was named Blake, a 35-year-old veteran from Colorado Springs who was struggling with mental health challenges. His friend had just died. And so that was part of the contextual backstory. Can you tell me a little bit more about Blake and why you developed this particular persona?
Sam Dorison: Before we get started, Stephanie: we try to talk in terms of simulations versus chatbots. That’s just the preferred language, because chatbots are generic, like ChatGPT, Gemini, or Claude, whereas simulations are very honed in and specific. So if you hear me, I tend to say simulations rather than chatbots. When we talk about simulations, what we mean at ReflexAI are the really well-honed personas that are meant to provide participants with practice on a particular situation that they will encounter in their jobs. Each of the simulations has a unique backstory, a way of interacting, very specific types of responses, very specific personality traits that come across in the interaction. So whenever we talk about simulations or simulation personas, it’s referencing those particular configured practice environments for our partners. I think when people think chatbots, they tend to think of tools that can be used to ask a wide range of questions and go in a wide range of directions. And of course, in the same way that if you’re a Delta pilot practicing in a flight simulator, that simulation has a very particular purpose to it and a very particular set of conditions. It’s not just a flying video game. We think about the same thing in the simulation world, where each of the simulations that our partners are using is particularly deployed and configured for their unique use case. We configured Blake as part of a peer support training that ReflexAI launched in partnership with Google.org. That’s available to all veterans across the country to practice mental health conversations with other veterans. And this came out of our work with the VA, which is one of ReflexAI’s biggest partners, and a lot of our early partnership there. So just for a bit of context, ReflexAI supports the VA in several areas of training, including the Veterans Crisis Line, their chaplain service, the Vet Centers, and the Office of Suicide Prevention. In particular, our work with the Veterans
Sam Dorison: Crisis line involves training hundreds of VA staff members a year who will answer calls 24/7 from veterans in crisis, that there are over 1200 responders answering calls, chats and texts at Veterans Crisis Line, and the expectation is that they will be available and well trained and prepared to serve every single veteran that reaches out. An incredibly important and and noble goal. And as part of that, they go through about 16 different simulations as part of training. And as you referenced, Stephanie, they all have different backstories. They all have different sets of experiences, whether that’s their military experience, their age, where they live in the country, their religious traditions, their support networks, their suicidal ideations, everything that would be important for a crisis responder who is talking to that veteran to be prepared to engage with. And these are all quite different. Their responses to questions will be different. Some will be more willing to share, some will be less willing to share information. And this is important because that’s how it is in the real world. And so the experience that you had with Blake. Blake is a much shorter simulation because it’s in a peer support context versus a professional setting. But the experience you had with Blake is an accurate one, where you start to uncover a bit of Blake’s backstory. You start to uncover some of the motivations and emotions that Blake has. And in doing so, hopefully you got better at some of your skills of talking to someone who might be going through a crisis.
Stephanie Hepburn: And what I think is so cool is it’s designed for peers, for veterans specifically, but anybody can go on there. So anybody who has a veteran in their life, whether that’s a family member, a friend, or a loved one, can go on the app, and it helps them understand how to have these challenging conversations. Why did you make it open to the public?
Sam Dorison: We made it open to the public because my co-founder and I are deep believers in technology for good. The reason that we launched Home Team the way we did is we feel a real obligation and responsibility for our technology to be used for good, and veterans who have served our country have a 50% higher likelihood of dying by suicide than non-veterans. For us, that’s an area of responsibility that we feel as citizens of this country. And we took the time, the energy, and the resources, in partnership with Google, to provide this training for veterans in their communities at no cost.
Stephanie Hepburn: I love that it’s so accessible and that somebody who has a loved one who might be navigating a mental health challenge, whether it’s depression, anxiety, or suicidal ideation, can come on and practice. And one of the things that we talked about is that this helps with role play and practicing skills, but that’s for peers as well as people who are developing the skills to be on the crisis lines. Those on the crisis lines have to have so much more training. What does that look like? How’s that different for somebody who’s prepping to be a 988 crisis counselor?
Sam Dorison: In our work with 988 centers, and I think we now work with greater than 10% of the network across the country, over 20 centers, what we hear from them is: we have some really great content that we’ve developed (“we” meaning the centers) or that they’ve received from Vibrant, which is the administrator of the network. And that content is very effective for teaching the expectations and skills, whether that’s the completion of a risk assessment, the use of empathy, or the safety planning requirements in those conversations. But what that does not involve is the level of practice that you would want to actually deeply internalize those skills. Historically, what that’s meant is that these centers do a lot of manual practice: trainees practicing with each other, practicing with very busy trainers, pulling experienced counselors off the lines to practice, and then also needing to get feedback from those people. So in our work with the centers, one of the areas we focus on primarily is training: deploying really detailed training simulations that represent a wide range of types of calls that you will experience in 988. And then immediately after the simulations, you get very detailed feedback on what went well and what could go better, so you can try it again and continue to improve. We know intuitively, of course, that real-world practice is valuable; I think that’s not up for debate in a wide range of sectors. But in the research that we’ve done, and we just published this with one of the centers that we work with, individuals who went through eight or more simulations as part of training were 18% more effective at completing risk assessments, and their ability to build trust and communicate empathetically was 16% better. So there’s a huge impact of “I get to practice my skills before I do this work.” And that impact is across both the kind of technical hard skills like risk assessment as well as some of the things that are considered soft skills, like empathy or open-ended questions.
Stephanie Hepburn: When I was playing around with the persona Blake, what I thought was interesting is just even knowing how to start a conversation, a difficult conversation, and then also having an open-ended question at the end to keep the conversation going, keep the person engaged. These seem very intuitive, but when you’re talking about a mental health crisis, it can be so hard, and we have so much stigma baked into our society, that it was great just to even see examples of how to do that. So I imagine that’s instrumental for the 988 crisis call centers as well.
Sam Dorison: It’s absolutely instrumental in 988. And I’ll give you another example. We all know that when we are building rapport and building trust, it’s helpful to be specific and play back what you’re hearing, right? But you get on a call with someone in crisis, and it can be very easy, even as someone who listens carefully and is empathetic, to default to something like “That sounds really hard,” which is more generic, versus something that’s much more specific, like “It makes total sense that you feel lonely when your friends don’t text you back,” a much more specific, empathetic statement, or “A lot of people in your situation would struggle when their friends don’t text them back,” right? Something really specific and nuanced. And a lot of the practice, as you mentioned with one of the simulations, but even more generally, is allowing folks the ability to learn how to respond in ways that are specific, that build trust, that communicate empathy when it’s not easy to do, when you’ve never met the person before. And one of the magic pieces of 988 is you are connecting with someone that you’ve never connected with before. You might never connect with them again, and yet you can have an enormous impact on their life just by being there for them.
Stephanie Hepburn: One of the debates in these conversations that I’ve been having is whether a conversation should be halted or continued, and I’ve noticed the broad-use AI chatbots have kind of ebbed and flowed in their responses. There have been times where many of them halted the conversation: there would be a pop-up that would provide resources, but the user could no longer enter text within the chat, so it stopped the dialogue. And right now it seems that most of the broad-use ones are continuing to allow dialogue. Some don’t ask any open-ended questions; some continue to ask open-ended questions. I realize that what you’re doing is different. What you’re doing is creating these personas for a specific use, which is to help train crisis counselors, or, in the case of Blake, for example, to help train peers and loved ones on how to interact with somebody with a specific set of experiences, how to help them navigate crisis and provide resources. So I realize the distinction there, but I’m curious, what are your thoughts in terms of maintaining dialogue or not maintaining dialogue?
Sam Dorison: For me, really, personally and professionally, the core of how I look at AI tools is: if it were a member of my family or one of my closest friends using it, what would they deserve? And we have an extensive set of AI ethics principles and actions internally that we’ve developed with an independent AI ethicist. So we’ve invested really heavily in this and have thought a lot about it, essentially every single day. And that’s the lens I see things through. So when I think about what we do, and then to circle back, Stephanie, to your precise question on this too, which I think is a fantastic one: when we think about our tools for training and for quality assurance, we believe that if it were someone in our family calling 988, we would want them to talk to a human who’s exceptionally well trained. And that means they’ve had a lot of practice, they’ve gotten feedback on how they’ve done, and they’ve continued to improve. And we would want 100% of the calls that crisis responder is answering to be looked at for quality assurance and continuous learning. That’s the other piece of what we do: for quality assurance, we analyze 100% of interactions so we can help centers see where their strengths are and where they can continue to improve. As I think about chatbots generally, it really boils down to: if it were a member of my family or one of my closest friends using it, what would I want for them? And I would want them to have resources surfaced when there might be a situation where that would be beneficial.
Sam Dorison: I would want a chatbot that certainly communicates empathetically with them, but also helps them, if relevant, see the promise of the future. And I think where we see a lot of the challenge is that these chatbots are used for a lot of different things, and they can respond in a lot of different ways. So on the question of whether the conversation should technically continue or should pause, I think it would depend on what’s happened in the conversation to that point. But what I would say is, just like when you’re talking to a human, you would want them to be able to provide you with resources, to provide you with what responsibly comes next. I think that’s an absolute no-brainer. And technically that requires, obviously, a lot of work on the part of these model research labs to be able to do. But the ones that do it better are certainly the ones thinking through: if I were on the other side of this, or someone from my family were on the other side of this, what would they deserve?
Stephanie Hepburn: Yeah, and I think that’s such a great point. I’m not sure they’ve settled on that. You know, I’m not sure that they’re interacting with the experts who could help them navigate what might be the best approach. Or maybe they are, and maybe there’s just simply not enough data yet to say whether a conversation should be halted or not halted. And people have very strong opinions on it. I’ve had some experts tell me that the ultimate safety mechanism is a pop-up that says text, dial, or chat 988, but then other experts are saying, well, yeah, but if they’re not already going to do that, then you’re halting dialogue, and then what? So I think it’s a nuanced discussion. And what I’m seeing with the broad-use chatbots is it’s ebbed and flowed so much, probably because they don’t really know internally. At this point, we don’t have federal regulation of AI, and I think the result is that companies are determining for themselves what those internal guardrails look like, and that changes because they’re probably bringing in new information and trying to figure it out. But what I’m seeing predominantly now is that almost universally,
Stephanie Hepburn: Unfortunately, as one of my next episodes will illustrate, not always are resources provided. But what I’m seeing is increasingly that is true. And there’s a pop up. And now most of the broad use ones are also allowing for continued dialogue. So I’m not sure these companies have settled on that yet. I love the perspective that the leading ethos and the leading idea behind that should be, what do I want a loved one to be able to experience? If they’re navigating this and they reach out to an AI chatbot at two in the morning because they cannot find a safe space with their friends, family, their mental health provider. And this is what they feel at the moment, is the best outlet. I love that perspective that it should be designed with that in mind. And so going back to your original point of how everything has to be really tailored, for example, you were talking about for veterans specifically, can you give some other examples of how ReflexAI tailors these personas for specific populations?
Sam Dorison: One of the centers we work with in Oregon is Lines for Life. They also operate YouthLine, which is staffed by teens after school and on weekends, so that youth calling can talk to someone who’s been in their position more recently. And that has meant deploying simulations of really nuanced youth personas so that the trainees on YouthLine can practice with them. It also means, though, that for the adults who answer the youth line at other hours, when the kids are in school or overnight when they should be sleeping before school the next day, those counselors have extensive practice talking to youth and young adults. That’s just one example. We’ve also done a lot of work on natural disasters. In some areas of the country, and this has been very clear in the news over the last year, if not much longer, natural disasters are a major source of stress and often come up in conversations with crisis lines. That means, depending on where in the country you’re supporting callers, there will be different natural disasters that they are encountering, with different long-term effects, and the call takers in those regions should have experience answering those types of calls and being able to suggest additional resources beyond 988 for individuals calling with those challenges. So there can be geographic nuance. There can be nuance by age. There can be nuance by urban and rural communities, a wide range of nuances.
Sam Dorison: One of the nuances that’s really been universal across the country over the last 18 to 24 months is the increase in third-party callers. And, Stephanie, I can define this if that’s helpful. We think about first-party callers: this is “I am struggling with my mental health, so I’m calling 988 to talk about my challenges.” You can think about this in 911 terms; this is the equivalent of “I’m feeling a lot of chest pain and chest pressure. I’m calling 911 because I might be having a heart attack.” First-party callers: you’re calling about yourself. Then there are third-party callers, where you’re worried about someone else. So this is “Hey, my spouse is feeling a lot of chest pressure and I think they might be having a heart attack.” And there’s an analogy on 988 for this too: you’re calling because you’re worried about a friend’s mental health, a colleague’s mental health, one of your kids’ mental health. And these calls have really grown since the launch of 988. To be clear, this is a good thing. This means that more folks are comfortable reaching out, and the person who might not be able to call or isn’t aware of 988 can still benefit from support because someone in their life is calling. Those types of third-party calls require a complementary but slightly different skill set than a first-party call, because you’re not just assessing the person calling, although you should also assess them.
Sam Dorison: You’re also assessing someone that you’re not talking to, and they might have a particular perspective on their friend or their colleague or their parent or whoever they’re calling about, which you need to understand in order to think through next steps with them. So the rise of third party calls has happened nationally. And one of the things that reflex did in response to this, we heard this from a lot of our partners. We saw this in conversations with essentially 100% of them was we launched a training in particular for third party callers. It included content. It also included three simulations of third party callers that could be practiced by not just new counselors at these centers, but all counselors. For us, this is a this is incredibly important because 988 is meant to address an enormous mental health challenge and public health crisis in this country. And part of that is supporting and being there for the people who are supporting others. And in order to do that, the counselors deserve to be super well trained on how to do that. These centers deserve to have the training that can enable that, and the people calling deserve to have that high quality support. So again, it really comes back to our lens of what do people deserve? And whether it’s the centers, the individuals calling, the individuals answering the calls, they all deserve high quality training and high quality support for third party calls.
Stephanie Hepburn: For me personally, I thought it was so interesting, as somebody who is not a mental health professional, to navigate a conversation with Blake. Are there other websites or platforms or personas that you’ve worked on for somebody like me or my listeners, who are not mental health professionals and are not going through training to be a crisis counselor, but are interested in doing this kind of work just to better interact with those in our lives?
Sam Dorison: I love that question, Stephanie. And certainly we talk about Home Team as being focused on veterans, but it is intentionally available to everyone. So to any of your listeners who want to try it: they are text-based simulations, not the voice-based ones that we often use with our other partners, but please see that as available to anyone listening to your podcast and interested in this work. I would also say a lot of crisis lines across the country do have volunteer programs, and that was how I first got involved as a crisis responder, as a volunteer, because I believe in giving back, and part of that meant volunteer service. So to the extent that any of your listeners are also thinking, I’m not a crisis counselor myself, I’m not a mental health professional, but I’d love to build these skills and potentially support people in crisis, I would say you can find numerous 988 centers, including Crisis Text Line nationally, that have volunteer programs you can join. So those are two options. We don’t deploy general training simulations out into the world beyond our commitment to veterans through Home Team and our commitment to our partners across the country. But that doesn’t mean that the tools that we have configured are not available: they’re available through Home Team, and they’re also available through dozens of 988 centers.
Stephanie Hepburn: Are there any personas that you’ve developed that are just your favorite?
Sam Dorison: Some of my favorite personas are some of the hardest. They’re the ones that folks do near the end of training. We have several that are in some way inspired by the curmudgeonly grandfather in Up, which I know is a maybe unexpected reference. And I think those are some of my favorites, because we all have someone in our life who’s not inclined to open up, where if you ask them, “Hey, how have you been?” it’s “Oh, I’ve been fine.” “Anything you want to talk about?” “No, not really.” And the ability to build trust with someone like that is a skill. We see that when people engage with simulations like that, they build that skill, they get better at it. They get better at validating emotions. They get better at asking open-ended questions. They get better at being there for the other person on the other person’s terms. So I think those are some of my favorites. Some of my other favorites, though, Stephanie, are particular anecdotes within simulations. We have one example simulation: a young adult who just moved to a new city for work, living alone for, I believe, the first time, if not the second. And we know a lot of folks in this country move for work and often move to places where they might not have a great support network. And it turns out the simulation, as you do some digging, will often cook to decompress after work or after soccer practice. We had one trainee have a really deep conversation with this simulation about what he likes to cook, and that really built a lot of trust that could then translate over to harm reduction and safety planning and other mental health concepts. And it speaks to the dynamics of the simulation, because 99% of users of that simulation are not going to have a deep conversation around cooking. And that’s okay. They’re going to build trust talking about soccer or talking about basketball or talking about the mental health struggle.
Stephanie Hepburn: I mean, it’s just reminding us that leaning into somebody’s interests can help them open up.
Sam Dorison: 100%.
Stephanie Hepburn: Going back: what safeguards do you put in place, given that these models change so rapidly? Let’s say Gemini changes a little bit, there’s a new iteration. What do you do to ensure that those changes don’t affect the safeguards that you already had in place?
Sam Dorison: The first thing that we do, and this is really important, is decide what’s important to us before we talk about the safeguards. It can be really tempting to just start with “What safeguards do we need?” But what you need to start with, in our opinion, is what the specific risks are, and then how to map safeguards to them. One of the biggest things that we do is actually beyond training; it’s quality assurance, where a center, instead of reviewing 3% of conversations manually, which is really the max that they can do and is the requirement within 988, can look at trends across 100% of interactions. And one of the dimensions that they look at super closely is the risk assessment and effectively asking about suicide. A great suicide risk assessment question is something like, “I really hear that you’ve been struggling for a while, so I want to check in: are you thinking about killing yourself?” That question is neutral, direct, non-judgmental, and grounded in context, really important parts of a risk assessment. And that’s in contrast to a question like, “You wouldn’t think about trying to kill yourself, right?” which is not neutral, is not totally direct, is definitely not non-judgmental, and is not grounded in context.
Sam Dorison: So when we think about the AI guardrails on something like the risk assessment scoring dimension, again, a very deep dive. To answer your question, it is important that we build into it the ability to differentiate clinically appropriate risk assessment questions with inappropriate or off protocol questions. So it’s not just did you ask about suicide yes or no, but did you do it in a way that aligns with the organization’s expectations? And regardless of how that is assessed on the back end, with various models looking at various components of that, no matter what the model updates are that happen. We have an obligation to our partners to get it right so that when they look at high performing risk assessment statistics and look at performance on their team and how that’s impacting the caller’s long term. They have a really clear view of it. So the guardrail there is not around what’s appropriate or inappropriate from a within the conversation itself, because we don’t have chatbots that are talking to individuals in crisis, but it’s about performance of the tools. And for us that has meant investing really heavily in what are the expectations of a risk assessment question for our partners, and how do we align the models with that expectation?
Stephanie Hepburn: While you were talking, one thing I was thinking about is that when you were giving the examples of the best way to ask these questions, those ways make the person understand the intent behind them, which is care, love. Sometimes when we ask questions, that’s not how it comes across, even if that’s our intention. So even that tweak of asking in a non-judgmental way, in a way that makes it clear you’re asking “Are you having these thoughts?” as opposed to leading with “You’re not having these thoughts,” which means there’s really only one answer that’s acceptable. So for laypeople, for my listeners who are laypeople, what are the one or two things that you recommend when you identify that somebody might be struggling? What are your top recommendations?
Sam Dorison: I think there are probably two or three things. One is you can be there for them without knowing exactly what to say. I think it’s human nature to think, “I don’t know exactly what to say, so I should just say nothing.” But actually saying “I don’t know what to say, but I’m here for you,” or “I don’t have the exact right words, but I’m really glad you mentioned that,” and just being a human with them, which is much easier to suggest than to do, right? I’ve been in this situation myself where I don’t know what to say, and the best thing you can say is “I don’t know what to say. What you’re going through really sounds shitty, but I’m glad you mentioned it.” That’s the first thing: be a human and talk human to human. The second thing is there’s an enormous amount of evidence that asking about suicide does not increase the likelihood of a suicide attempt. So asking someone, “Hey, it sounds like you’ve been struggling for a while. Are you thinking about killing yourself?” Or, “This might be a tough question to answer, but I wanted to ask because I care. Are you thinking about suicide?” You will not increase their likelihood of a suicide attempt by asking about suicide. Many times you will decrease the likelihood just by the destigmatization, and they will often open up to you in a way that lets you support them in a way they would not have if you did not ask. So be a human in these conversations, even if you don’t know exactly what to say, and be comfortable asking about mental health, and about suicide in particular, because all of the clinical evidence suggests that is the right thing to do.
Stephanie Hepburn: Thank you so much, Sam, for joining me.
Sam Dorison: Stephanie, thank you as well for all of the work that you do with your listeners on mental health, on 988, on AI. It’s a real pleasure to have this conversation, and I’m looking forward to continuing it.
Stephanie Hepburn: That was Sam Dorison, co-founder and CEO of ReflexAI, which creates generative AI personas to help train crisis counselors, as well as veterans, on having challenging mental health conversations. If you enjoyed this episode, please subscribe and leave us a review wherever you listen to the podcast; it helps others find the show. Thanks for listening. I’m your host and producer. Our associate producer is Rin Koenig. Audio engineering by Chris Mann. Music is Vinyl Couch by Blue Dot Sessions.
References
Flight Simulators for Mental Health: How AI Is Helping Train Crisis Counselors
Home Team App — Mental Health Training for Veterans
Increased Practice Simulations and Feedback Results in Stronger 988 Counselor Performance
Where to listen to and follow ‘CrisisTalk’
Apple | Spotify | Amazon | iHeart | YouTube
We want to hear from you
Have you had experiences with an AI chatbot you want to share? We want to hear from you. We are especially interested in whether, during a conversation, the chatbot has ended the chat with you and/or displayed a pop-up with mental health resources. Email us at
Credits
“CrisisTalk” is hosted and produced by executive producer Stephanie Hepburn. Our associate producer is Rin Koenig. Audio-engineering by Chris Mann. Music is Vinyl Couch by Blue Dot Sessions.

