This content will become publicly available on June 18, 2025

Title: Learning to Persuade on the Fly: Robustness Against Ignorance

How Can Platforms Learn to Make Persuasive Recommendations?

Many online platforms recommend content from creators or products from sellers to their users. Such recommendations aim to persuade users to take actions that also serve system-wide goals. To do this effectively, a platform needs to know the underlying distribution of payoff-relevant variables (such as content or product quality). However, this distributional information is often lacking, for example when new content creators or sellers join a platform. In “Learning to Persuade on the Fly: Robustness Against Ignorance,” Zu, Iyer, and Xu study how a platform can make persuasive recommendations over time, without distributional knowledge, using a learning-based approach. They first propose and motivate a robust-persuasiveness criterion for settings with incomplete information. They then design an efficient recommendation algorithm that satisfies this criterion and achieves low regret relative to the benchmark of complete distributional knowledge. By relaxing the strong assumption of complete distributional knowledge, this research extends the applicability of information design to more practical settings.
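To make the learning-based idea concrete, the following is a minimal, hypothetical sketch, not the authors' algorithm; the model, the names (mu_true, adopt_prob), and the confidence-bound parameters are all illustrative assumptions. In a standard binary persuasion toy model, the platform privately observes each item's quality (high with unknown probability mu_true), always recommends high-quality items, and recommends low-quality items only as often as a conservative lower-confidence estimate of that probability permits, so that following the recommendation remains rational for the user under every prior consistent with the data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

T = 10_000        # number of interaction rounds
mu_true = 0.3     # unknown P(item is high quality); the platform must learn this

def adopt_prob(mu, high_quality):
    """Recommendation rule for a known prior mu in the classic binary
    persuasion model: always recommend high-quality items, and recommend
    low-quality items just often enough that the user's posterior belief
    P(high | recommended) stays at least 1/2, keeping obedience rational."""
    if high_quality:
        return 1.0
    return 1.0 if mu >= 0.5 else mu / (1.0 - mu)

alg_payoff = bench_payoff = 0.0
highs = rounds = 0   # sufficient statistics for the empirical estimate of mu

for t in range(T):
    high = rng.random() < mu_true   # platform privately observes quality
    # Lower-confidence-bound estimate: underestimating mu keeps the
    # obedience constraint satisfied for every prior consistent with the
    # data observed so far, i.e. the scheme stays persuasive while learning.
    mu_hat = highs / rounds if rounds else 0.0
    mu_lcb = max(0.0, mu_hat - np.sqrt(np.log(T) / max(rounds, 1)))
    alg_payoff += adopt_prob(mu_lcb, high)     # expected adoption this round
    bench_payoff += adopt_prob(mu_true, high)  # full-knowledge benchmark
    highs += int(high)
    rounds += 1

print(f"average regret per round: {(bench_payoff - alg_payoff) / T:.4f}")
```

Because the estimate is biased downward, this sketch under-recommends low-quality items early on and converges toward the full-knowledge benchmark as data accumulate, which is the qualitative trade-off a low-regret guarantee captures.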

 
Award ID(s):
2303372
PAR ID:
10528612
Author(s) / Creator(s):
Zu; Iyer; Xu
Publisher / Repository:
Institute for Operations Research and the Management Sciences
Date Published:
Journal Name:
Operations Research
ISSN:
0030-364X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mainstream platforms’ content moderation systems typically employ generalized “one-size-fits-all” approaches, intended to serve both general and marginalized users. Thus, transgender people must often create their own technologies and moderation systems to meet their specific needs. In our interview study of transgender technology creators (n=115), we found that creators face issues of transphobic abuse and disproportionate content moderation. Trans tech creators address these issues by carefully moderating and vetting their userbases, centering trans contexts in content moderation systems, and employing collective governance and community models. Based on these findings, we argue that trans tech creators’ approaches to moderation offer important insights into how to better design for trans users, and ultimately, marginalized users in the larger platform ecology. We introduce the concept of trans-centered moderation – content moderation that reviews and successfully vets transphobic users, appoints trans moderators to effectively moderate trans contexts, considers the limitations and constraints of technology for addressing social challenges, and employs collective governance and community models. Trans-centered moderation can help to improve platform design for trans users while reducing the harm faced by trans people and marginalized users more broadly. 
  2. Transparency matters a lot to people who experience moderation on online platforms; much CSCW research has viewed offering explanations as one of the primary solutions to enhance moderation transparency. However, relatively little attention has been paid to unpacking what transparency entails in moderation design, especially for content creators. We interviewed 28 YouTubers to understand their moderation experiences and analyze the dimensions of moderation transparency. We identified four primary dimensions: participants desired the moderation system to present moderation decisions saliently, explain those decisions in depth, afford effective communication with users, and offer repair and learning opportunities. We discuss how these four dimensions are mutually constitutive and conditioned in the context of creator moderation, where the target of governance mechanisms extends beyond the content to creator careers. We then elaborate on how a dynamic transparency perspective could value content creators' digital labor, how transparency design could support creators' learning, and implications for the transparency design of other creator platforms.
  3. Social media platforms make trade-offs in their design and policy decisions to attract users and stand out from other platforms. These decisions are influenced by a number of considerations, e.g., what kinds of content moderation to deploy or what kinds of resources a platform has access to. Their choices play into broader political tensions: social media platforms are situated within a social context that frames their impact, and their designs can embody politics that enforce power structures and serve existing authorities. We turn to Pillowfort, a small social media platform, to examine these political tensions as a case study. Using discourse analysis, we examine public discussion posts between staff and users as they negotiate the site's development over a period of two years. Our findings illustrate the tensions in navigating the politics that users bring with them from previous platforms, the difficulty of building a site's unique identity and encouraging commitment, and examples of how design decisions can both foster and break trust with users. Drawing on these findings, we discuss how the success and failure of new social media platforms are shaped by political influences on design and policy decisions.
  4. Well-intentioned users sometimes enable the spread of misinformation due to limited context about where the information originated and/or why it is spreading. Building on recommendations from prior research on tackling misinformation, we explore the potential to support media literacy through platform design. We develop and design an intervention consisting of a tweet trajectory, to illustrate how information reached a user, and contextual cues, to make credibility judgments about accounts that amplify, manufacture, produce, or are situated in the vicinity of problematic content (AMPS). Using a research-through-design approach, we demonstrate how the proposed intervention can help discern credible actors, challenge blind faith amongst online friends, evaluate the cost of associating with online actors, and expose hidden agendas. Such facilitation of credibility assessment can encourage more responsible sharing of content. Through our findings, we argue for using trajectory-based designs to support informed information sharing, advocate for feature updates that nudge users with reflective cues, and promote platform-driven media literacy.
  5. Algorithmic systems help manage the governance of digital platforms featuring user-generated content, including how money is distributed to creators from the profits a platform earns from advertising on this content. However, creators producing content about disadvantaged populations have reported that these kinds of systems are biased, having associated their content with prohibited or unsafe content, leading to what creators believed were error-prone decisions to demonetize their videos. Motivated by these reports, we present the results of 20 interviews with YouTube creators and a content analysis of videos, tweets, and news about demonetization cases to understand YouTubers' perceptions of demonetization affecting videos featuring disadvantaged or vulnerable populations, as well as creator responses to demonetization, and what kinds of tools and infrastructure support they desired. We found creators had concerns about YouTube's algorithmic system stereotyping content featuring vulnerable demographics in harmful ways, for example by labeling it "unsafe" for children or families; creators believed these demonetization errors led to a range of economic, social, and personal harms. To provide more context to these findings, we analyzed and report on the technique a few creators used to audit YouTube's algorithms to learn what could cause the demonetization of videos featuring LGBTQ people, culture, and/or social issues. In response to the varying beliefs about the causes and harms of demonetization errors, we found our interviewees wanted more reliable information and statistics about demonetization cases and errors, more control over their content and advertising, and better economic security.