Clinical group bereavement therapy often promotes narrative sharing as a therapeutic intervention to facilitate grief processing. Increasingly, people turn to social media to express stories of loss and seek support surrounding bereavement experiences, specifically the loss of loved ones to suicide. This paper reports the results of a computational linguistic analysis of narrative expression within an online suicide bereavement support community. We identify distinctive characteristics of linguistic style that distinguish narrative posts from non-narrative posts. We then develop and validate a machine-learning model for tagging narrative posts at scale and demonstrate its utility by applying it to a more general grief support community, where we validate the model's narrative-tagging accuracy and compare the proportion of narrative posts across the two communities. Narrative posts make up about half of all posts in these two grief communities, demonstrating the importance of narrative to online grief support. Finally, we consider how the narrative-tagging tool presented in this study can inform platform design to more effectively support people sharing narratives of grief in online grief support spaces.
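The abstract does not specify the model's architecture or features, so the following is only a minimal illustrative sketch of how a narrative-post tagger of this kind could be prototyped as a text-classification pipeline; the example posts, labels, and pipeline choices are hypothetical, not the paper's method.

```python
# Illustrative sketch only: the paper's actual narrative-tagging model and
# features are not specified in this abstract. This shows one generic way to
# tag narrative vs. non-narrative posts with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labeled posts: 1 = narrative (tells a story of loss), 0 = non-narrative.
posts = [
    "When my brother died last spring, I kept replaying our last phone call.",
    "The night we lost her, I drove for hours before I could tell anyone.",
    "Does anyone know of good books on coping with sudden loss?",
    "What should I expect from my first group therapy session?",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF features feeding a logistic-regression classifier.
tagger = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
tagger.fit(posts, labels)

# Tag new, unlabeled posts at scale by calling predict on their text.
print(tagger.predict(["Last winter I wrote him a letter I never got to send."]))
```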
"I hate you. I love you. I'm sorry. I miss you." Understanding Online Grief Expression Through Suicide Bereavement Letter-Writing Practices
When bereaved individuals seek online support in response to the suicide of a loved one, their expressions of grief take many forms. Although the intense grief expressions that individuals bereaved by suicide commonly share in private therapeutic settings can be helpful in healing from traumatic loss, these same expressions may cause harm to others when shared in a public online support community. In this study, we present a qualitative analysis of letters posted on the r/SuicideBereavement subreddit, and of comments replying to those posts, to explore what diverse expressions of grief demand of platform design. We find that letter posts contain potentially harmful grief expressions that, in this community, nonetheless generate mutual support among community members. Informed by these findings, we consider the design challenges online platforms face in supporting users who find healing in sharing certain grief expressions while also protecting users who may be harmed by exposure to those expressions. Taking inspiration from offline therapy modalities, we consider the design implications of creating specialized online grief support spaces for diverse grief expressions.
- Award ID(s): 2048244
- PAR ID: 10528215
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 8
- Issue: CSCW1
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 27
- Subject(s) / Keyword(s): online networks, social media, bereavement, suicide
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We licensed a dataset from a mental health peer support platform catering mainly to teens and young adults. We anonymized the name of this platform to protect the individuals in our dataset. On this platform, users can post content and comment on others' posts. Interactions are semi-anonymous: users share a photo and screen name with others, and they have the option to post with their username visible or anonymously. The platform is moderated, but the ratio of moderators to posters is low (0.00007). The original dataset included over 5 million posts and 15 million comments from 2011-2017. It was scaled to a feasible size for qualitative analysis by running a query to identify posts by a) adolescents aged 13-17 who were seeking support for b) online sexual experiences (not offline) with people they know (not strangers).
-
Background: In 2023, the United States experienced its highest recorded number of suicides, exceeding 50,000 deaths. Among psychiatric disorders, major depressive disorder stands out as the most common, affecting 15% to 17% of the population and carrying a notable suicide risk of approximately 15%. However, not everyone with depression has suicidal thoughts. While "suicidal depression" is not a clinical diagnosis, it may be observed in daily life, emphasizing the need for awareness. Objective: This study aims to examine the dynamics, emotional tones, and topics discussed in posts within the r/Depression subreddit, with a specific focus on users who had also engaged in the r/SuicideWatch community. The objective was to use natural language processing techniques and models to better understand the complexities of depression among users with potential suicidal ideation, with the goal of improving intervention and prevention strategies for suicide. Methods: Archived English-language posts were extracted from the r/Depression and r/SuicideWatch Reddit communities spanning 2019 to 2022, resulting in a final data set of over 150,000 posts contributed by approximately 25,000 unique overlapping users. A broad and comprehensive mix of methods was applied to these posts, including trend and survival analysis, to explore the dynamics of users across the 2 subreddits. The BERT family of models extracted features from the data for sentiment and thematic analysis. Results: On August 16, 2020, the post count in r/SuicideWatch surpassed that of r/Depression. The transition from r/Depression to r/SuicideWatch in 2020 was the shortest, lasting only 26 days. Sadness emerged as the most prevalent emotion among overlapping users in the r/Depression community. In addition, changes in physical activity, negative self-view, and suicidal thoughts were identified as the most common depression symptoms, all showing strong positive correlations with the emotional tone of disappointment. Furthermore, the topic "struggles with depression and motivation in school and work" (12%) emerged as the most discussed topic aside from suicidal thoughts, categorizing users based on their inclination toward suicide ideation. Conclusions: Our study underscores the effectiveness of using natural language processing techniques to explore language markers and patterns associated with mental health challenges in online communities like r/Depression and r/SuicideWatch. These insights offer novel perspectives distinct from previous research. In the future, there is potential for further refinement and optimization of machine classification using these techniques, which could lead to more effective intervention and prevention strategies.
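The abstract names the BERT family of models but not the exact checkpoints or tooling used. Below is a minimal sketch of how emotion scoring of posts might be done with a pretrained transformer, assuming the Hugging Face transformers library; the model identifier is a placeholder, not necessarily the one the study used.

```python
# Illustrative sketch only: the exact checkpoints and pipeline are not specified
# in this abstract. This shows one generic way to score the emotional tone of
# Reddit posts with a pretrained transformer.
from transformers import pipeline

# Placeholder model identifier -- substitute whichever emotion/sentiment
# checkpoint the analysis actually relies on.
emotion_scorer = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,          # return scores for every emotion label
    truncation=True,     # Reddit posts can exceed the model's max length
)

# Hypothetical example posts standing in for the extracted Reddit data.
posts = [
    "I can't focus on school or work anymore; everything feels pointless.",
    "Some days I think everyone would be better off without me.",
]

for post, scores in zip(posts, emotion_scorer(posts)):
    top = max(scores, key=lambda s: s["score"])
    print(f"{top['label']:>14}  {top['score']:.2f}  |  {post[:50]}")
```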
-
Misinformation about the COVID-19 pandemic proliferated widely on social media platforms during the course of the health crisis. Experts have speculated that consuming misinformation online can potentially worsen the mental health of individuals by causing heightened anxiety, stress, and even suicidal ideation. The present study aims to quantify the causal relationship between sharing misinformation, a strong indicator of consuming misinformation, and experiencing exacerbated anxiety. We conduct a large-scale observational study spanning over 80 million Twitter posts made by 76,985 Twitter users during an 18.5-month period. The results demonstrate that users who shared COVID-19 misinformation experienced approximately twice the increase in anxiety of similar users who did not share misinformation. Socio-demographic analysis reveals that women, racial minorities, and individuals with lower levels of education in the United States experienced a disproportionately higher increase in anxiety than other users. These findings shed light on the mental health costs of consuming online misinformation. The work bears practical implications for social media platforms in curbing the adverse psychological impacts of misinformation, while also upholding the ethos of an online public sphere.
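The study's causal-inference pipeline is not detailed in this abstract; the sketch below illustrates only the basic group comparison it implies, using pandas and entirely hypothetical user-level anxiety scores and column names.

```python
# Illustrative sketch only: hypothetical matched users and anxiety scores,
# showing the kind of before/after comparison between sharers and non-sharers
# of misinformation that the abstract describes.
import pandas as pd

# Hypothetical per-user anxiety scores before and after the study window,
# with a flag for whether the user shared COVID-19 misinformation.
users = pd.DataFrame({
    "shared_misinfo": [True, True, False, False],
    "anxiety_before": [0.30, 0.25, 0.28, 0.31],
    "anxiety_after":  [0.46, 0.39, 0.36, 0.38],
})

users["anxiety_change"] = users["anxiety_after"] - users["anxiety_before"]
by_group = users.groupby("shared_misinfo")["anxiety_change"].mean()

# Ratio of mean anxiety increase: sharers vs. matched non-sharers.
print(by_group[True] / by_group[False])
```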
-
Cyberbullying is a prevalent concern within social computing research that has led to the development of several supervised machine learning (ML) algorithms for automated risk detection. A critical aspect of ML algorithm development is how to establish ground truth that is representative of the phenomenon of interest in the real world. Often, ground truth is determined by third-party annotators (i.e., "outsiders") who are removed from the situational context of the interaction and therefore cannot fully understand the perspective of the individuals involved (i.e., "insiders"). To understand the extent of this problem, we compare "outsider" versus "insider" perspectives when annotating 2,000 posts from an online peer-support platform. We extend this analysis to a corpus containing over 2.3 million posts on bullying and related topics, and reveal significant gaps in ML models that use third-party annotators to detect bullying incidents. Our results indicate that models based on the insiders' perspectives yield significantly higher recall in identifying bullying posts and are able to capture a range of explicit and implicit references and linguistic framings, including person-specific impressions of the incidents. Our study highlights the importance of incorporating the victim's point of view in establishing effective tools for cyberbullying risk detection. As such, we advocate for the adoption of human-centered and value-sensitive approaches for algorithm development that bridge insider-outsider perspective gaps in a way that empowers the most vulnerable.
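The paper's models and annotation pipeline are not described in this abstract; the following sketch illustrates, with hypothetical posts and labels, how recall on the bullying class could be compared for simple classifiers trained on insider versus outsider annotations of the same data.

```python
# Illustrative sketch only: hypothetical data and a generic classifier, showing
# how recall could be compared for models trained on "insider" (victim) labels
# versus "outsider" (third-party annotator) labels of the same posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.pipeline import Pipeline

def train_and_recall(train_texts, train_labels, test_texts, test_labels):
    """Fit a simple text classifier and return recall on the bullying class (1)."""
    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(train_texts, train_labels)
    return recall_score(test_labels, model.predict(test_texts), pos_label=1)

# Hypothetical posts, labeled once by insiders and once by outsiders.
posts = [
    "you are worthless and everyone knows it",
    "nobody at school will sit with me anymore",
    "great game last night, see you at practice",
    "stop messaging me or I'll tell everyone your secret",
]
insider_labels  = [1, 1, 0, 1]   # victims also flag implicit, context-dependent harm
outsider_labels = [1, 0, 0, 0]   # third parties miss cases that need context
test_posts       = ["they keep leaving me out of every group chat",
                    "looking forward to the weekend trip"]
test_true_labels = [1, 0]

print("insider recall: ", train_and_recall(posts, insider_labels, test_posts, test_true_labels))
print("outsider recall:", train_and_recall(posts, outsider_labels, test_posts, test_true_labels))
```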

