
Dr. Dennis Morrison on Mental Health Technologies and Augmented Intelligence


Stephanie Hepburn is a writer in New Orleans. She is the editor in chief of #CrisisTalk.

Dr. Dennis Morrison says the mental health and substance use disorder fields are often slower to incorporate technological innovations. “What’s cutting edge for us is often more mature in business and medical healthcare.” However, he notes the gaps are shrinking. 

Morrison, who has served as the Chief Clinical Officer for Netsmart Technologies and CEO of Centerstone Research Institute, believes the divergence between medical and behavioral healthcare is partly because of differing relationships with technology. “The medical profession regularly uses technology in their care of patients while we, historically, haven’t — that makes them more technophilic and us a bit more technophobic.” In medical care, technology is often used to help diagnose a person, while Morrison says what’s critical in behavioral healthcare is the relationship between the provider and the client. “The treatment is the talk in behavioral healthcare.”

Ready or not, many behavioral health providers were propelled into technology through telehealth during the Covid-19 pandemic. Regulatory and policy changes removed long-standing delivery barriers. While many celebrated the change, Kristin Neylon, senior project associate at the National Association of State Mental Health Program Directors Research Institute, told #CrisisTalk in 2021 that others were initially wary — they worried about confidentiality and privacy. However, she shared that providers soon discovered they liked the technology, which “allowed them to reach people they once couldn’t.”

Another hurdle is funding. Morrison points out that behavioral health is often perceived as soft science with hard-to-track data and outcomes. In 2021, Ken Zimmerman, CEO of Fountain House and co-director of S2i, told #CrisisTalk that philanthropic groups have been hesitant to invest in behavioral healthcare innovations. “They’ve been unsure what one can do to change the mental health system,” he said. “It’s a big system with many new acronyms.” He helped create S2i, a disruptive think tank that can speak the languages of philanthropy and mental health, helping to bridge the gap between them. (Philanthropies like the Huntsman Mental Health Foundation in Utah and the Horizon Foundation in Maryland have played critical roles in behavioral health system change, including technologies.)

Morrison highlights that 988 and Covid-19 have helped spark interest in and support of technological innovation in the behavioral health field. That said, research began long beforehand, with experts investigating how to use technology to overcome barriers and improve mental healthcare access. Among them is Dr. John Torous, director of the digital psychiatry division at Boston’s Beth Israel Deaconess Medical Center. He’s researched digital phenotyping technologies, which he calls “characterizing the lived experience,” as well as those focused on interventions and remote delivery of care. In 2019, he told Dr. David Gratzer that more and more people have smartphones and wearable devices, which has allowed researchers, with users’ permission, to capture data on how people experience mental illness. “What are their symptoms in real-time? What does their sleep look like? What is their physical activity?” asked Torous.

Mobile health apps can give users information on many aspects of well-being. One, developed by Shinichi Tokuno, a professor at the University of Tokyo, and PST Inc., analyzes vocal cord vibration with user consent; the app identified changes in people’s stress levels in Japan in response to Covid-related events. Mobile health apps also give users, researchers and clinicians data on key issues — like how sleep impacts physical activity and mood. “We can begin to understand the functional outcomes beyond just asking, ‘How are you feeling?’” Torous told Gratzer.

That doesn’t mean clinicians should no longer ask how someone is sleeping, points out Morrison. Torous agrees, sharing with Gratzer that he continues to ask about his patients’ sleep. He says mobile health data can help flesh out what’s happening: whether a person who believes they fall asleep far earlier is actually falling asleep at 2 a.m., or whether someone recorded as sleeping is in fact experiencing insomnia.

Technology, highlights Morrison, is designed to augment, not replace, care. “I hope organizations stop using the term artificial intelligence and replace it with augmented intelligence,” he says. That’s what the American Medical Association encourages as well, stating that augmented intelligence “focuses on AI’s assistive role.”

How far this assistive role should go is a matter of much debate. Earlier this year, the mental health tech company Koko faced backlash when it used GPT-3 to provide emotional support to around 4,000 people. Rob Morris, Koko’s co-founder, shared outcomes from the experiment on Twitter, now X. Koko is an anonymous self-help community found on platforms like Discord — it allows people to send positive responses to those who seek them, and the Koko bot gives users tips on how to reframe their thoughts.

During the experiment, people participating in Koko on the Discord platform could use OpenAI’s GPT-3 to write, at least in part, their responses to people reaching out to Koko for emotional support. Dr. Camille Nebeker, a University of California, San Diego, professor and executive director and co-founder of the Research Center for Optimal Digital Ethics, told NBC News that Koko failed to get informed consent. “Informed consent is incredibly important for traditional research,” she said. “It’s a cornerstone of ethical practices, but when you don’t have the requirement to do that, the public could be at risk.” However, Morris told #CrisisTalk that the GPT-3 function was opt-in not only for message creators but also for recipients, who could “choose if they wanted to read or ignore any messages labeled as ‘co-written by Koko Bot.’” He emphasized that before receiving their own responses, users were also shown how they might help others with assistance from GPT-3.

Determining what mobile health technologies are effective and helpful can be challenging. That’s why Torous has worked with the American Psychiatric Association to develop the App Evaluation Model, which examines an app’s access and background, privacy and safety, clinical foundation, usability and therapeutic goal.

Technology can also help tackle resource and workforce gaps. Morrison notes that a robust AI algorithm has allowed Crisis Text Line (text “talk” to 741741), a 24/7, free and confidential behavioral health texting service, to review incoming texts so counselors can prioritize them by severity, not necessarily chronology. “The machine learning algorithm that we’ve built helps us triage our queue,” Jana Lynn French, director of Community Partnerships at Crisis Text Line, told the 988 Jam community on May 24. “It’s learned the words people use when they’re at most imminent risk.” The nonprofit, a member of the 988 network, has found that lethal language is predictive of risk. For example, in 2019, Crisis Text Line shared that the pill emoji, the crying face emoji, 800 mg, Excedrin and ibuprofen had been more likely than the word “suicide” to lead to high-risk conversations.
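
The article doesn’t describe Crisis Text Line’s actual model, but the severity-over-chronology idea can be illustrated with a minimal sketch. Everything below is hypothetical: RISK_TERMS, risk_score and TriageQueue are made-up stand-ins for a trained classifier, using the terms the article cites only as placeholder weights.

```python
from dataclasses import dataclass, field
import heapq
import itertools

# Hypothetical term weights standing in for a trained model's learned
# associations between phrases and imminent risk (illustrative values only).
RISK_TERMS = {
    "800 mg": 0.9,
    "excedrin": 0.8,
    "ibuprofen": 0.7,
    "suicide": 0.5,
}

def risk_score(message: str) -> float:
    """Score a message by the highest-weighted risk term it contains."""
    text = message.lower()
    return max((w for term, w in RISK_TERMS.items() if term in text), default=0.0)

@dataclass(order=True)
class QueuedText:
    priority: float              # negative score, so higher risk pops first
    arrival: int                 # tie-breaker: earlier texts first within a risk level
    message: str = field(compare=False)

class TriageQueue:
    """Severity-first queue: texts are popped by predicted risk, not arrival order."""
    def __init__(self) -> None:
        self._heap: list[QueuedText] = []
        self._counter = itertools.count()

    def push(self, message: str) -> None:
        heapq.heappush(self._heap, QueuedText(-risk_score(message), next(self._counter), message))

    def pop(self) -> str:
        return heapq.heappop(self._heap).message

if __name__ == "__main__":
    q = TriageQueue()
    q.push("just want to talk about a rough day")
    q.push("I took 800 mg and I don't care what happens")
    q.push("can't sleep again")
    print(q.pop())  # the 800 mg message surfaces first despite arriving second
```

In a real system the keyword table would be replaced by a model trained on labeled conversations, but the queue logic — rank by predicted severity, break ties by arrival time — is the part the quote describes.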

Morrison shares how augmented intelligence is also helping clinicians. He points to Eleos, an AI personal assistant that collects information during therapeutic sessions. (He’s on Eleos’ board of directors.) The AI engine gives clinicians information that can be used in their electronic health record notes. He describes a clinician who’d been seeing a client for a while; the platform identified terms the clinician hadn’t. “Eleos picked up that the client mentioned grief six times in the session — that’s something the clinician hadn’t caught,” he points out. “The information gave them additional insights.”

No matter how good the augmented intelligence, Morrison notes that it can’t be, on its own, a decision-making tool. “These technologies aren’t meant to supplant provider judgment but to provide tools that help us become better providers tomorrow than we are today.”
