

Search for: All records

Award ID contains: 1928627

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Algorithmic rankers are widely used in automated decision systems for hiring, admissions, and loan approval. Without appropriate explanations, decision-makers often cannot audit or trust a ranker's outcomes. In recent years, XAI (explainable AI) methods have focused on classification models; comparable explanation methods for algorithmic rankers have yet to be developed. Moreover, explanations are sensitive to changes in the data and in ranker properties, so decision-makers need transparent model diagnostics for calibrating the degree and impact of ranker sensitivity. To address these needs, we take a dual approach: i) designing explanations by transforming Shapley values for the simple form of a ranker based on linear weighted summation, and ii) designing a human-in-the-loop sensitivity-analysis workflow that simulates data whose attributes follow user-specified statistical distributions and correlations. We use a visualization interface to validate the transformed Shapley values and to draw inferences from them through multi-factorial simulations spanning data distributions, ranker parameters, and rank ranges. (An illustrative code sketch of the linear-ranker attribution idea appears after this list.)
    Free, publicly-accessible full text available June 18, 2024
  2. Content moderation is a crucial aspect of online platforms, requiring human moderators (mods) to repeatedly review and remove harmful content. This moderation process can impose cognitive overload and emotional labor on mods. As new platforms and designs emerge, such as live streaming spaces, new challenges arise from the real-time nature of the interactions. In this study, we examined the use of ignoring as a moderation strategy by interviewing 19 Twitch mods. Our findings indicated that ignoring involves complex cognitive work and significant invisible labor in decision-making. Additionally, we found that ignoring is an essential component of real-time moderation. These preliminary findings suggest that ignoring has the potential to be a valuable moderation strategy in future interactive systems, which highlights the need to design better support for ignoring in interactive live-streaming systems.
  3. In live streaming communities, each micro community centered on a streamer sets its own guidelines, so it is common for volunteer moderators (mods) and the streamer to disagree on how to handle various situations. In this study, we conducted an online survey (N=240) with live streaming mods to explore their commitment to the streamer in growing the micro community and the different styles in which they handle conflicts with the streamer. We found that 1) mods apply more active and cooperative styles than passive and assertive ones to manage conflicts, though they may be forced to do so, and 2) mods with strong commitment to the streamer tend to apply styles that show either high concern for the streamer or low concern for themselves. We reflect on how these results can affect micro community development and recommend designs to mitigate conflict and strengthen commitment.
  4. When people have the freedom to create and post content on the internet, particularly anonymously, they do not always respect the rules and regulations of the websites on which they post, leaving other unsuspecting users vulnerable to sexism, racism, threats, and other unacceptable content in their daily cyberspace diet. However, content moderators witness the worst of humanity on a daily basis in place of the average netizen. This takes its toll on moderators, causing stress, fatigue, and emotional distress akin to the symptomology of post-traumatic stress disorder (PTSD). The goal of the present study was to explore whether adding positive stimuli to break times (images of baby animals or beautiful, awe-inspiring landscapes) could help reduce the negative side effects of being a content moderator. To test this, we had over 300 experienced content moderators read 200 fake text-based social media posts and decide whether each was acceptable for public consumption. Although we set out to test positive emotional stimulation, we instead found that the cumulative nature of the negative emotions likely negates most of the effects of the intervention: the longer a person had practiced content moderation, the stronger their negative experience. Connections to compassion fatigue and how best to spend work breaks as a content moderator are discussed.
  5. Much scholarship across the humanities and social sciences seeks to shed light on the intersection of far-right politics and social media platforms. Yet scholars tend to focus on racist actors and the ideological underpinnings of platform policies, while the contingencies that shape the experiences of the content reviewers who make decisions about racist content remain underexamined. This article fills this gap by exploring such contingencies from a linguistic anthropological perspective. Drawing on Facebook moderators’ stories, I illustrate the factors adjacent to, and beyond, ideology that animate the adjudication of racist hate speech.
  6. Abbate, Janet; Dick, Stephanie (Eds.)
    Book chapter
  7. Volunteer moderators (mods) play significant roles in developing moderation standards and dealing with harmful content in their micro-communities. However, little work has explored how volunteer mods work as a team. In line with prior work on volunteer moderation, we interview 40 volunteer mods on Twitch, a leading live streaming platform. We identify how mods collaborate on tasks (off-stream coordination and preparation, in-stream real-time collaboration, and relationship building both off-stream and in-stream to reinforce collaboration) and how mods contribute to moderation standards (collaboratively working on the community rulebook and individually shaping community norms). We uncover how volunteer mods work as an effective team. We also discuss how the affordances of multi-modal communication and the informality of volunteer moderation contribute to task collaboration, standards development, and mods’ roles and responsibilities.
  8. Content moderation is an essential part of online community health and governance. While much extant research centers on what happens to the content, moderation also involves the management of violators. This study focuses on how moderators (mods) decide on their actions after a violation takes place but before the sanction, by examining how they "profile" violators. Through observations and interviews with volunteer mods on Twitch, we found that mods engage in a complex process of collaborative evidence collection and profile violators into different categories to decide the type and extent of punishment. Mods consider violators' characteristics as well as behavioral history and violation context before taking moderation action. The main purpose of profiling was to avoid excessive punishment and to better integrate violators into the community. We discuss the contributions of profiling to moderation practice and suggest design mechanisms to facilitate mods' profiling processes.
  9. Modern social media platforms such as Twitch and YouTube embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity and abuse directed at content creators. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are laborious and complex, and even when semi-automated, they involve high-consequence human decisions that affect the safety and public perception of the platforms. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we present a systematic understanding of how visual analytics can help in human-in-the-loop content moderation. We contribute a characterization of the data-driven problems and needs of proactive moderation and present a mapping between those needs and visual analytic tasks through a task abstraction framework. We discuss how the task abstraction framework can be used for transparent moderation, for designing interventions for moderators’ well-being, and ultimately, for creating futuristic human-machine interfaces for data-driven content moderation.
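
Code sketch for item 1 (illustrative only): the paper's own transformation of Shapley values into rank-space is not reproduced here. The snippet below shows only the standard closed-form Shapley attribution for a linear weighted-summation scorer that such an approach builds on: assuming independent features, feature j's Shapley value for a candidate is w_j * (x_j - E[x_j]). All weights and data are made up for the example.

```python
import numpy as np

def linear_shapley(X, w):
    """Exact Shapley values for a linear scorer f(x) = w @ x.

    For a linear model with (assumed) independent features, the
    Shapley value of feature j for candidate x is
        phi_j = w_j * (x_j - mean_j),
    i.e. the feature's weighted deviation from the background mean.
    """
    mu = X.mean(axis=0)          # background expectation E[x_j]
    return w * (X - mu)          # shape: (n_candidates, n_features)

# Hypothetical ranking scenario: 5 candidates, 3 attributes.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 3))     # normalized attribute values
w = np.array([0.5, 0.3, 0.2])    # ranker weights (sum to 1)

scores = X @ w
ranking = np.argsort(-scores)    # highest score ranks first
phi = linear_shapley(X, w)

# Each row of phi decomposes how far a candidate's score sits above
# or below the average score, attributed per attribute; for every i,
# scores[i] == scores.mean() + phi[i].sum().
for rank, i in enumerate(ranking, 1):
    print(f"rank {rank}: candidate {i}, score {scores[i]:.3f}, "
          f"attributions {np.round(phi[i], 3)}")
```

Because each candidate's attributions sum to that candidate's score minus the average score, they can be read directly as per-attribute pushes up or down the ranking.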