

Search for: All records

Creators/Authors contains: "Zhu, Haiyi"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available November 11, 2025
  2. Free, publicly-accessible full text available May 11, 2025
  3. Free, publicly-accessible full text available June 3, 2025
  4. Online mental health support communities, in which volunteer counselors provide accessible mental and emotional health support, have grown in recent years. Despite millions of people using these platforms, the clinical effectiveness of these communities on mental health symptoms remains unknown. Although volunteers receive some training in therapeutic skills proven effective in face-to-face settings, such as active listening and motivational interviewing, it is unclear how the use of these skills in an online context affects people's mental health. In our work, we collaborate with one of the largest online peer support platforms and use both natural language processing and machine learning techniques to examine how one-on-one support chats on the platform affect clients' depression and anxiety symptoms. We measure how characteristics of support providers, such as their experience on the platform and their use of therapeutic skills (e.g., affirmation, showing empathy), affect support seekers' mental health changes. Based on a propensity-score matching analysis to approximate a random-assignment experiment, the results show that online peer support chats improve both depression and anxiety symptoms with a statistically significant but relatively small effect size. Additionally, support providers' techniques such as emphasizing the autonomy of the client lead to better mental health outcomes. However, we also found that the use of some behaviors, such as persuading and providing information, is associated with a worsening of mental health symptoms. Our work provides key insights for mental health care in online settings and for designing training systems for online support providers. 
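The propensity-score matching analysis mentioned in this abstract can be illustrated with a minimal sketch. This is not the paper's actual pipeline; it assumes synthetic data with a single confounding covariate, uses logistic regression to estimate propensity scores, and performs nearest-neighbor matching with replacement to estimate the average treatment effect on the treated (ATT):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic (hypothetical) data: covariate x drives both treatment uptake
# and the outcome, so a naive comparison of group means is confounded.
n = 2000
x = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(int)  # confounded assignment
outcome = 0.5 * treated + x + rng.normal(scale=0.5, size=n)   # true effect = 0.5

# 1. Estimate propensity scores P(treated = 1 | x) with logistic regression.
ps = (
    LogisticRegression()
    .fit(x.reshape(-1, 1), treated)
    .predict_proba(x.reshape(-1, 1))[:, 1]
)

# 2. Match each treated unit to the control unit with the nearest
#    propensity score (nearest-neighbor matching, with replacement).
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3. ATT estimate: mean outcome difference across matched pairs.
att = (outcome[t_idx] - outcome[matches]).mean()
print(f"matched ATT estimate: {att:.2f}")
```

Because matching balances the confounder across groups, the matched estimate lands near the true effect, while the naive difference in group means is inflated by the confounding of x with treatment.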
  5. Much of our modern digital infrastructure relies critically on open-source software. The communities responsible for building this cyberinfrastructure require maintenance and moderation, which are often supported by volunteer efforts. Moderation, as a non-technical form of labor, is a necessary but often overlooked task that maintainers undertake to sustain the community around an open-source software (OSS) project. This study examines the various structures and norms that support community moderation, describes the strategies moderators use to mitigate conflicts, and assesses how bots can play a role in assisting these processes. We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance. Our main contributions include a characterization of moderated content in OSS projects and of moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks. We hope that these findings will inform the implementation of more effective moderation practices in open source communities. 
  6. Large generative AI models (GMs) like GPT and DALL-E are trained to generate content for general, wide-ranging purposes. GM content filters are generalized to filter out content which has a risk of harm in many cases, e.g., hate speech. However, prohibited content is not always harmful -- there are instances where generating prohibited content can be beneficial. So, when GMs filter out content, they preclude beneficial use cases along with harmful ones. Which use cases are precluded reflects the values embedded in GM content filtering. Recent work on red teaming proposes methods to bypass GM content filters to generate harmful content. We coin the term green teaming to describe methods of bypassing GM content filters to design for beneficial use cases. We showcase green teaming by: 1) Using ChatGPT as a virtual patient to simulate a person experiencing suicidal ideation, for suicide support training; 2) Using Codex to intentionally generate buggy solutions to train students on debugging; and 3) Examining an Instagram page using Midjourney to generate images of anti-LGBTQ+ politicians in drag. Finally, we discuss how our use cases demonstrate green teaming as both a practical design method and a mode of critique, which problematizes and subverts current understandings of harms and values in generative AI. 