AI in action: Enhancing suicide risk detection in behavioral health

From automating administrative tasks to clinical decision support, the use cases for artificial intelligence in healthcare continue to multiply. One emerging approach is the use of natural language processing (NLP) to gauge the risk of suicide among behavioral health patients. 

At the same time, suicide rates have been rising, according to estimates from the Centers for Disease Control and Prevention. In 2022, suicide claimed more than 49,400 lives in the U.S. Just a few years earlier, its impact on the U.S. was estimated at more than $1 trillion. The signs of suicidal ideation can be difficult to detect: the majority of people who die by suicide visited their primary care physician in the year leading up to their death, and some warning signs are missed even in the final week.

Earlier this year, NeuroFlow, maker of behavioral health integration software, published the results of a study that found NLP software could surface possible suicidal ideation in more than half of patients who might otherwise have gone undetected.

The study analyzed free-form text responses to journaling prompts submitted by 425 users of NeuroFlow’s engagement platform. Patients were selected if they had been flagged by the NLP as having expressed suicidal ideation. 

While 81% of participants had completed their PHQ-9—a questionnaire that screens for depression and suicidal ideation—nearly half of those had not indicated suicidal ideation on their most recent assessment. 

The other 19% did not complete a PHQ-9 at all, a missed opportunity for detection. In total, 58% of the flagged patients (the 19% who skipped the questionnaire plus the roughly 39% whose most recent assessments showed no suicidal ideation) may not have been identified as being at risk of suicide without NLP.

The study relied on keyword detection against a suicidal ideation lexicon. After the NLP flagged a patient, clinicians reviewed the flagged text entries to further refine the model. NeuroFlow had a protocol in place to equip users with crisis resources and interventions within predefined time frames. Because NLP can be deployed remotely and at scale, it is well positioned to support marginalized communities where suicide rates are highest, psychiatric resources are scarce and adverse social determinants of health persist, the study argued.
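As a rough illustration of how lexicon-based keyword screening of this kind can work (the phrases, pattern matching and function names below are hypothetical, not NeuroFlow’s actual lexicon or code), a minimal sketch in Python might look like this:

```python
import re

# Illustrative phrases only; a real suicidal ideation lexicon is clinically
# curated and far larger.
SI_LEXICON = [
    "want to die",
    "kill myself",
    "end my life",
    "no reason to live",
    "better off without me",
]

def flag_entry(text: str, lexicon=SI_LEXICON) -> list:
    """Return any lexicon phrases found in a free-text journal entry."""
    lowered = text.lower()
    hits = []
    for phrase in lexicon:
        # Word-boundary match so, e.g., "skill myself" doesn't trip "kill myself".
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append(phrase)
    return hits

entry = "Lately I feel like there is no reason to live."
matches = flag_entry(entry)
if matches:
    # In the workflow the study describes, a flag routes the entry to
    # clinician review and triggers crisis resources within set time frames.
    print("Flag for clinician review:", matches)
```

The automated match is only a first pass; as the study describes, flagged entries go to clinicians, whose reviews also feed back into refining the model.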

A range of digital health companies today are harnessing NLP in different ways to catch those at risk and direct them to resources, including human interventions. Fierce Healthcare talked to several of them to understand their approaches and how far along they are.
 

Proactive crisis intervention
 

NLP combined with human intervention can help keep patients in the appropriate level of care, NeuroFlow argues. In 2023, it started to test this with Emory Healthcare, rolling out its tech across the health system’s primary care clinics in a pilot. The goal was to support the delivery of psychiatric services. Before the partnership, Emory had not had a patient-facing app to support behavioral health, executives previously told Fierce Healthcare. 

“There is just an insufficient mental health workforce to attune to all of the needs of our patients,” said Brandon Kitay, M.D., director of behavioral health integration at Emory Healthcare. “Technology is affording us a greater window of observation into human behavior.”

Care integration at Emory had not been driven by technology until the health system launched its collaborative care model. The approach embeds behavioral health specialists in primary care clinics to provide on-site psychotherapy and coordinate care. With NeuroFlow added, patients get self-guided support resources between appointments. 

NLP can flag at-risk language in a patient’s NeuroFlow journal entries and push crisis resources to them, Kitay noted, even when the practice office is closed. The app also generates an alert that goes to the patient’s care manager, who can reach out between office visits. 

“It gives us just another opportunity to provide patients with risk resources,” Kitay told Fierce Healthcare.

NeuroFlow also facilitates measurement-based care through frequent assessments like the PHQ-9 and logs patient data such as sleep and mood ratings. But part of the reason some patients under-report on questionnaires is survey fatigue, according to Kitay. NLP can be a safety net, an indirect way to identify those at risk even when a questionnaire fails to catch it.

“It’s just another piece of the argument of why I need to get my health system behind us on deploying these kinds of apps,” Kitay said.

It is crucial to engage primary care settings in suicide prevention, Kitay explained: it’s the setting where mental health conditions are most commonly treated at Emory. It is also a matter of health equity; most patients referred to collaborative care services are minorities with limited insurance coverage, he said. Anyone referred automatically gets access to NeuroFlow.

A third of Emory patients also have treatment-resistant depression, Kitay added, and would need a psychiatric specialist. Engaging with NeuroFlow primes patients to potentially transition to more intensive care. “We’re going to identify those patients earlier and we’re going to get them to that level of care much more expediently,” Kitay said.

Conversely, the app can also help redirect patients who no longer need to be in a higher acuity setting, helping free up valuable provider panels. “In using technology and triage to step patients down, what I’m doing is I’m opening myself up to other patients,” Kitay said.

Emory is now headed into phase 2 of its pilot with NeuroFlow, where it hopes to expand the offering to patients beyond the primary care setting.


"We didn’t have enough information"
 

NeuroFlow users who need a human intervention may be handed off to a provider in its clinical partner network, like Array Behavioral Care. As part of its direct-to-consumer offering, Array has three pathways: one for mild to moderate symptoms, a second for higher-acuity cases and a third for severe cases. Level three translates into more intensive care and more touch points.

Array considers answers to patient-reported questionnaires and a patient’s history to understand the severity of their symptoms and determine their pathway. Care coordinators also reach out to patients between appointments. 

“Even if we have all those touch points, the patients—they’re still not going to tell us everything,” Leroy Arenivar, M.D., senior medical director at Array, told Fierce Healthcare. A patient might underreport because they are afraid of the implications for their treatment. “If we’re only looking at the PHQ-9 and GAD-7 in the triage phase, we might not catch something that really could be more intense than it appears.”

Occasionally, an Array clinician will discover that a patient should be in a higher pathway, according to Arenivar. That is not always because they underreported their symptoms; it could be, for instance, because their symptoms got worse. “It has surprised me that some of these people that we think are pathway one actually should be pathway three, and we just don’t realize it because we didn’t have enough information,” Arenivar said.

Arenivar is excited by the prospect of AI tools honing an area like triage: “It'll be really interesting to incorporate some of this and be able to be more accurate with assigning pathways.”


Clinical decision support 
 

In 2020, Clarigent Health, a spinout from Cincinnati Children’s, launched an app that uses NLP to analyze speech and identify vocal biomarkers of suicidal risk. The HIPAA-compliant listening tool is designed to provide clinical decision support to mental health professionals. 

That project was built on the work of John Pestian, Ph.D., a neuropsychiatric AI scientist who runs a research lab at Cincinnati Children’s. Pestian has been developing machine learning methods for identifying patients at risk of suicide for more than two decades. Using speech samples from therapy sessions and notes from people who died by suicide, Pestian and colleagues demonstrated in 2020 that AI algorithms could detect suicidality with up to 90% accuracy.

Around the same time, Talkspace was publishing its own research with NYU Grossman School of Medicine researchers. Together, they developed an algorithm trained on anonymized, client-consented therapy transcripts to detect suicide-related content in patients’ messages to therapists. In the study, the model was found to be nearly 83% accurate—meaning 17% of the time, flagged content turned out not to indicate risk. The model was purposely tuned toward a higher false positive rate to avoid potentially missing at-risk patients. 
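One common way to bias a text classifier toward catching more true positives at the cost of extra false alarms is to lower its decision threshold. The sketch below is a hypothetical illustration of that trade-off in Python; the toy data, model and threshold are stand-ins, not Talkspace’s actual system:

```python
# Hypothetical sketch: tune a risk classifier toward recall by lowering the
# decision threshold, accepting more false positives for therapists to review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy examples standing in for consented, de-identified therapy transcripts.
texts = [
    "I have been sleeping badly but work is fine",
    "I don't see the point of going on anymore",
    "Had a good session with my family this week",
    "Sometimes I think everyone would be better off without me",
]
labels = [0, 1, 0, 1]  # 1 = message contains possible suicide-risk content

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

THRESHOLD = 0.3  # below the default 0.5: flag liberally, let the clinician decide

def needs_review(message: str) -> bool:
    """Return True when the message should be surfaced to the therapist."""
    prob = model.predict_proba(vectorizer.transform([message]))[0][1]
    return prob >= THRESHOLD

print(needs_review("Everyone would be better off without me"))
```

Dropping the threshold below the usual 0.5 surfaces more borderline messages, matching the stated preference for false positives over missed risk, with the therapist making the final call.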

The feature was officially rolled out on Talkspace in 2019. The analysis runs in real time on messages sent by patients in their encrypted virtual therapy room and triggers an alert to the therapist, allowing them to respond with the appropriate intervention. The key is having a human in the loop.

“It always has to be looked at by the therapist to understand the context in which the alert is coming,” Talkspace CEO Jon Cohen, M.D., told Fierce Healthcare. “It makes the therapist, we think, better at being a therapist … we don’t tell them what to do.”

Today, about half of Talkspace visits happen via messaging, though that varies by population, per Cohen. Text sessions are more common among teens, for example. NLP’s ability to flag those at risk is particularly important, Cohen said, given the current youth mental health crisis. 

As of late 2023, Talkspace's algorithm had flagged 32,000 members whose messages to their therapists showed signs of suicidality or risk of self-harm. Talkspace clinicians are optimistic about the feature because they understand the context in which it is used, according to Cohen. 

“We’re not telling you what to do—this is like a dashboard warning sign,” he said. 

An internal feedback survey found that 83% of Talkspace providers consider the feature useful, per the company. Patients are also told that the algorithm is running in the background. Cohen indicated Talkspace plans to eventually roll out the feature to other session formats, such as video, but did not disclose additional details. 


Moderating for self-harm, suicide risk
 

To support long-term behavior change, solutions must be personalized and engaging. That’s what DarioHealth tries to achieve with Twill Care, a free peer-support platform for a range of communities from multiple sclerosis to pregnancy. The app offers tools, information and tips from experts and helps users swap advice.

“If I address the area that matters to you, that’s important to you, then that’s going to drive engagement,” Omar Manejwala, M.D., chief medical officer at DarioHealth, told Fierce Healthcare.

Digital behavioral health solutions typically face the same struggle, Manejwala explained: they engage consumers temporarily, then watch them drop off. What can offer personalization and value? A peer-support community, Manejwala argues—and establishing one before trying to convert users into customers is essential. “You’re really solving the problem of engagement, and in some ways, social media got this right by building large user bases first before seeking to monetize those solutions,” Manejwala said.

But the need for moderation stands in the way of digital health players offering community platforms. “Digital has been missing the boat on this, and one of the reasons is they’re worried about suicidality,” Manejwala said. 

Initially, Twill Care was moderated entirely by humans, but that approach is hard to scale. Today, the app also employs NLP to help analyze posts and comments for risks of self-harm and suicidality. “AI offers an extraordinary tool in that,” Manejwala said. “It’s a learning model, so it’s getting better.”
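A moderation loop of the kind Manejwala describes typically pairs an NLP screen with a human review queue, with moderator decisions saved as labels for later retraining. The sketch below is a hypothetical, simplified illustration of that loop, not Twill Care’s actual system:

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    """Hypothetical loop: an NLP model scores posts, humans review the flags,
    and their decisions become labeled data for the next retraining cycle."""
    pending: list = field(default_factory=list)
    labeled: list = field(default_factory=list)  # (text, is_risk) pairs

    def screen(self, post: str, risk_score: float, threshold: float = 0.4) -> None:
        # risk_score would come from the NLP model scoring the post.
        if risk_score >= threshold:
            self.pending.append(post)  # escalate to human moderators

    def review(self, post: str, is_risk: bool) -> None:
        # A moderator confirms or dismisses the flag; either way the decision
        # is stored so the model can keep improving.
        if post in self.pending:
            self.pending.remove(post)
        self.labeled.append((post, is_risk))

queue = ModerationQueue()
queue.screen("Feeling like hurting myself tonight", risk_score=0.9)
queue.review("Feeling like hurting myself tonight", is_risk=True)
print(len(queue.labeled))  # one labeled example for the next training cycle
```

The stored labels are what let the model keep learning: each confirmed or dismissed flag becomes training data for the next cycle.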

Twill has crisis and escalation protocols in place, and its moderation team is trained to respond with the appropriate interventions. But ultimately, humans alone can’t support large online communities at scale. “Technology can play a role in redefining and revamping how we think about engagement in behavioral health,” Manejwala said. “This is an approach that would not have been possible 15 years ago.”