Title: How social learning amplifies moral outrage expression in online social networks
Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.
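The abstract's reinforcement-learning account can be illustrated with a toy value-update model. This is a sketch, not the authors' actual analysis: the class name `OutrageLearner`, the sigmoid choice rule, and the `norm_prevalence` discount on the learning rate are assumptions made here to show how positive feedback could raise the future probability of outrage expression while strong network norms dampen feedback sensitivity, as the findings describe.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class OutrageLearner:
    """Toy agent: probability of expressing outrage is driven by a learned
    value that is updated from social feedback via a prediction error
    (a Rescorla-Wagner-style rule). The norm_prevalence term is a stand-in
    for the finding that users in high-outrage (e.g., ideologically extreme)
    networks are less sensitive to feedback."""

    def __init__(self, value=0.0, base_rate=0.5, norm_prevalence=0.0):
        self.value = value
        # Learning rate shrinks as outrage becomes more normative in the network.
        self.alpha = base_rate * (1.0 - norm_prevalence)

    def p_express(self):
        # Probability of posting an outrage expression on the next occasion.
        return sigmoid(self.value)

    def update(self, feedback):
        # feedback: e.g., normalized likes/shares received for an outrage post.
        prediction_error = feedback - self.value
        self.value += self.alpha * prediction_error
```

Under these assumptions, identical streams of positive feedback move a user in a low-outrage network toward expression faster than a user in a high-outrage network, matching the moderation effect the abstract reports.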
Award ID(s):
1808868
PAR ID:
10256839
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Science Advances
ISSN:
2375-2548
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. With more than 3 billion users, online social networks represent an important venue for moral and political discourse and have been used to organize political revolutions, influence elections, and raise awareness of social issues. These examples rely on a common process to be effective: the ability to engage users and spread moralized content through online networks. Here, we review evidence that expressions of moral emotion play an important role in the spread of moralized content (a phenomenon we call moral contagion). Next, we propose a psychological model called the motivation, attention, and design (MAD) model to explain moral contagion. The MAD model posits that people have group-identity-based motivations to share moral-emotional content, that such content is especially likely to capture our attention, and that the design of social-media platforms amplifies our natural motivational and cognitive tendencies to spread such content. We review each component of the model (as well as interactions between components) and raise several novel, testable hypotheses that can spark progress on the scientific investigation of civic engagement and activism, political polarization, propaganda and disinformation, and other moralized behaviors in the digital age.
  2. A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment—collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems.
  3. Online platforms offer forums with rich, real-world illustrations of moral reasoning. Among these, the r/AmITheAsshole (AITA) subreddit has become a prominent resource for computational research. In AITA, a user (author) describes an interpersonal moral scenario, and other users (commenters) provide moral judgments with reasons for who in the scenario is blameworthy. Prior work has focused on predicting moral judgments from AITA posts and comments. This study introduces the concept of moral sparks—key narrative excerpts that commenters highlight as pivotal to their judgments. Sparks thus mark sites of heightened moral attention, guiding readers to effective rationales. Analyzing 24,676 posts and 175,988 comments, we demonstrate that findings from social psychology on moral judgment extend to real-world scenarios. For example, negative traits (rude) amplify moral attention, whereas sympathetic traits (vulnerable) diminish it. Similarly, linguistic features such as emotionally charged terms (e.g., anger) heighten moral attention, whereas positive or neutral terms (leisure and bio) attenuate it. Moreover, we find that incorporating moral sparks enhances pretrained language models’ performance on predicting moral judgments, achieving gains in F1 score of up to 5.5%. These results demonstrate that moral sparks, derived directly from AITA narratives, capture key aspects of moral judgment and perform comparably to prior methods that depend on human annotation or large-scale generative modeling.
  4. Social media companies wield power over their users through design, policy, and their participation in public discourse. We set out to understand how companies leverage public relations to influence expectations of privacy and privacy-related norms. To interrogate the discourse productions of companies in relation to privacy, we examine the blogs associated with three major social media platforms: Facebook, Instagram (both owned by Facebook Inc.), and Snapchat. We analyze privacy-related posts using critical discourse analysis to demonstrate how these powerful entities construct narratives about users and their privacy expectations. We find that each of these platforms often makes use of discourse about "vulnerable" identities to invoke relations of power, while at the same time advancing interpretations and values that favor data capitalism. Finally, we discuss how these public narratives might influence the construction of users' own interpretations of appropriate privacy norms and conceptions of self. We contend that expectations of privacy and social norms are not simply artifacts of users' own needs and desires, but co-constructions that reflect the influence of social media companies themselves.
  5. Human behavior is frequently guided by social and moral norms, and no human community can exist without norms. Robots that enter human societies must therefore behave in norm-conforming ways as well. However, there is currently no solid cognitive or computational model of how human norms are represented, activated, and learned. We provide a conceptual and psychological analysis of key properties of human norms and identify the demands these properties put on any artificial agent that incorporates norms—demands on the format of norm representations, their structured organization, and their learning algorithms.