AI models that analyze language in social media posts fail to detect depression in Black people


By Stacy M. Brown, NNPA Newswire

In a study published in the Proceedings of the National Academy of Sciences (PNAS), researchers from the University of Pennsylvania uncovered significant disparities in the effectiveness of language-based models for detecting depression on social media, particularly when applied to Black individuals. The study's authors called for more inclusive approaches to mental health assessment and treatment.

Depression, a common mental health condition, often leaves clear linguistic traces: previous research has found that frequent use of first-person pronouns ("I") and certain groups of words expressing negative emotions can be signs of depression among social media users. However, the new analysis of Facebook posts from more than 800 individuals, including equal numbers of Black and White participants, revealed that these predictive signals applied primarily to White individuals.
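
To make the idea of these linguistic markers concrete, here is a minimal, illustrative Python sketch of how word-category rates like first-person pronoun use and negative-emotion language might be computed from a post. This is not the study's actual pipeline; the abbreviated word lists and the function name are invented for the example, and real models rely on validated lexicons and far richer features.

```python
import re

# Hypothetical, abbreviated word lists for illustration only; published work
# typically uses validated lexicons and many additional features.
FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"sad", "alone", "hopeless", "tired", "worthless", "hurt"}

def linguistic_marker_rates(post: str) -> dict:
    """Compute per-post rates of the two marker categories described above."""
    tokens = re.findall(r"[a-z']+", post.lower())
    total = max(len(tokens), 1)  # guard against empty posts
    return {
        "first_person_rate": sum(t in FIRST_PERSON_SINGULAR for t in tokens) / total,
        "negative_emotion_rate": sum(t in NEGATIVE_EMOTION for t in tokens) / total,
    }

if __name__ == "__main__":
    example = "I feel so tired and alone, and I can't explain why."
    print(linguistic_marker_rates(example))
```

The study's finding is that signals of this kind, even when present, did not predict depression equally well across racial groups.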

“We need to have the understanding that, when thinking about mental health and devising interventions for treatment, we should account for the differences among racial groups and how they may talk about depression. We cannot put everyone in the same bucket,” said Dr. Sharath Chandra Guntuku, a senior author of the study, who expressed surprise over the findings.

The research found that language-based models trained to detect depression performed significantly less accurately on social media posts by Black individuals. Even after the models were trained on language used specifically by Black individuals, their predictive ability remained poor.

“Why? There could be multiple reasons,” said the study’s lead author, Sunny Rai, Ph.D., a computer and information science postdoctoral researcher. “It could be the case that we need more data to learn depression patterns in Black individuals compared to White individuals. It could also be the case that Black individuals do not exhibit markers of depression on social media platforms due to perceived stigma.”

Rai said there’s a need for increased representation of Black individuals and other racial and ethnic groups in research to better understand how depression is expressed across diverse populations. The goal is to develop more accurate predictive models and improve mental health interventions tailored to different communities.

Moreover, the study revealed that specific linguistic markers previously associated with depression, such as first-person pronoun usage and expressions of negative emotion, were not indicative of depression among Black individuals. The researchers argued that this highlights the complexity of mental health expression across racial lines and underscores the importance of culturally sensitive approaches to mental health research and practice.

“AI-guided models that were developed using social media data can help in monitoring the prevalence of mental health disorders, especially depression, and their manifestations,” Rai added. “Such computational models hold promise in assisting policymaking as well as designing AI assistants that can provide affordable yet personalized healthcare options to citizens.”

The researchers noted that “insights made through AI can also serve the education of professionals who help people manage depression.”

“Understanding differences in how Black and White people with depression talk about themselves and their condition will be important when training psychotherapists who work across different communities,” said Lyle Ungar, Ph.D., a co-author of the study and professor of computer and information science.

This article was originally published by NNPA Newswire.
