

Search for: All records

Award ID contains: 2116751


  1. Online platforms offer forums with rich, real-world illustrations of moral reasoning. Among these, the r/AmITheAsshole (AITA) subreddit has become a prominent resource for computational research. In AITA, a user (author) describes an interpersonal moral scenario, and other users (commenters) provide moral judgments, with reasons, about who in the scenario is blameworthy. Prior work has focused on predicting moral judgments from AITA posts and comments. This study introduces the concept of moral sparks: key narrative excerpts that commenters highlight as pivotal to their judgments. Sparks thus represent heightened moral attention, guiding readers to effective rationales. Through an analysis of 24,676 posts and 175,988 comments, we demonstrate that findings from social psychology on moral judgment extend to real-world scenarios. For example, negative traits (e.g., rude) amplify moral attention, whereas sympathetic traits (e.g., vulnerable) diminish it. Similarly, emotionally charged terms (e.g., anger) heighten moral attention, whereas positive or neutral terms (e.g., leisure and bio) attenuate it. Moreover, we find that incorporating moral sparks enhances pretrained language models' performance on moral judgment prediction, yielding gains in F1 score of up to 5.5%. These results demonstrate that moral sparks, derived directly from AITA narratives, capture key aspects of moral judgment and perform comparably to prior methods that depend on human annotation or large-scale generative modeling.
    Free, publicly-accessible full text available September 15, 2026
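    The spark-augmentation idea described above can be sketched as input construction for a text classifier: the highlighted excerpts are appended to the post before it reaches a pretrained language model. The function name and separator token below are illustrative assumptions, not the authors' implementation.

    ```python
    def build_model_input(post, sparks, sep=" [SPARK] "):
        """Concatenate a post with the narrative excerpts (moral sparks)
        that commenters highlighted, so a classifier sees both the
        scenario and the spans that drew heightened moral attention."""
        if not sparks:
            return post
        return post + sep + sep.join(sparks)

    # Toy example: one post with a single highlighted excerpt.
    text = build_model_input(
        "I refused to lend my car to my brother.",
        ["refused to lend my car"],
    )
    ```

    The resulting string would then be tokenized and fed to the model in place of the raw post.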
  2. Argumentative stance classification plays a key role in identifying authors' viewpoints on specific topics. However, generating diverse pairs of argumentative sentences across various domains is challenging. Existing benchmarks often come from a single domain or focus on a limited set of topics. Additionally, manual annotation for accurate labeling is time-consuming and labor-intensive. To address these challenges, we propose leveraging platform rules, readily available expert-curated content, and large language models to bypass the need for human annotation. Our approach produces a multidomain benchmark comprising 4,498 topical claims and 30,961 arguments from three sources, spanning 21 domains. We benchmark the dataset in fully supervised, zero-shot, and few-shot settings, shedding light on the strengths and limitations of different methodologies. 
    Free, publicly-accessible full text available June 7, 2026
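    The zero-shot setting benchmarked above can be sketched as prompt construction for a claim-argument pair; the template wording is an illustrative assumption, not the benchmark's exact prompt.

    ```python
    def stance_prompt(claim, argument):
        """Build a zero-shot prompt asking an LLM whether the argument
        supports or opposes the topical claim."""
        return (
            f"Claim: {claim}\n"
            f"Argument: {argument}\n"
            "Does the argument support or oppose the claim? "
            "Answer with exactly one word: support or oppose."
        )

    prompt = stance_prompt(
        "Remote work improves productivity.",
        "Commuting wastes hours that could be spent on focused work.",
    )
    ```

    Few-shot variants would prepend a handful of labeled claim-argument pairs to the same template.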
  3. Effective human-AI collaboration hinges not only on the AI agent's ability to follow explicit instructions but also on its capacity to navigate ambiguity, incompleteness, invalidity, and irrelevance in communication. Gricean conversational and inference norms facilitate collaboration by aligning unclear instructions with cooperative principles. We propose a normative framework that integrates Gricean norms and cognitive frameworks (common ground, relevance theory, and theory of mind) into large language model (LLM) based agents. The framework adopts the Gricean maxims of quantity, quality, relation, and manner, along with inference, as norms for interpreting unclear instructions, whether ambiguous, incomplete, invalid, or irrelevant. Within this framework, we introduce Lamoids, GPT-4-powered agents designed to collaborate with humans. To assess the influence of Gricean norms on human-AI collaboration, we evaluate two versions of a Lamoid: one with norms and one without. In our experiments, a Lamoid collaborates with a human to achieve shared goals in a grid world (Doors, Keys, and Gems) by interpreting both clear and unclear natural language instructions. Our results reveal that the Lamoid with Gricean norms achieves higher task accuracy and generates clearer, more accurate, and contextually relevant responses than the Lamoid without them. This improvement stems from the normative framework, which enhances the agent's pragmatic reasoning, fostering effective human-AI collaboration and enabling context-aware communication in LLM-based agents.
    Free, publicly-accessible full text available May 19, 2026
  4. Political news is often slanted toward its publisher's ideology and seeks to influence readers by focusing on selected aspects of contentious social and political issues. We investigate political slants in news and their influence on readers by analyzing election-related news and reader reactions to that news on Twitter. To this end, we collected election-related news from six major US news publishers who covered the 2020 US presidential elections. We computed each publisher's political slant based on the favorability of its news toward the two major parties' presidential candidates. We found that the election-related news coverage shows signs of political slant both in news headlines and on Twitter. The difference in news coverage of the two candidates between the left-leaning (LEFT) and right-leaning (RIGHT) news publishers is statistically significant, with a larger effect size for news on Twitter than for headlines. Moreover, news on Twitter expresses stronger sentiments than the headlines. We identified moral foundations in reader reactions to the news on Twitter based on Moral Foundation Theory. Moral foundations in readers' reactions to LEFT and RIGHT differ to a statistically significant degree, though the effects are small. Further, these shifts in moral foundations differ across social and political issues. User engagement on Twitter is higher for RIGHT than for LEFT. We posit that an improved understanding of slant and influence can enable better ways to combat online political polarization.
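    The slant computation described above can be sketched as a difference in mean favorability toward the two candidates; the dictionary keys and the sign convention (positive leaning left) are illustrative assumptions, not the authors' exact formulation.

    ```python
    from statistics import mean

    def slant_score(favorability):
        """Score a publisher's slant as mean favorability toward the
        Democratic candidate minus mean favorability toward the
        Republican candidate, over that publisher's articles.
        Positive values lean left, negative values lean right."""
        return mean(favorability["dem"]) - mean(favorability["rep"])

    # Toy favorability scores for one publisher's articles.
    score = slant_score({"dem": [0.5, 0.5], "rep": [0.25, 0.25]})
    ```

    In practice the per-article favorability values would come from a sentiment model applied to headlines or tweets mentioning each candidate.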
  5. Moral reasoning reflects how people acquire and apply moral rules in particular situations. With social interactions increasingly happening online, social media provides an unprecedented opportunity to assess in-the-wild moral reasoning. We investigate the commonsense aspects of morality empirically using data from a Reddit subcommunity (i.e., a subreddit), r/AmITheAsshole, where an author describes their behavior in a situation and seeks comments about whether that behavior was appropriate. A commenter judges whether the author's or others' behaviors were wrong and provides reasons for that judgment. We focus on the novel problem of understanding the moral reasoning implicit in user comments about the propriety of an author's behavior. Specifically, we explore associations between the common elements of the indicated rationale and the extractable social factors. Our results suggest that a moral response depends on the author's gender and the topic of a post. Typical situations and behaviors include expressing anger and using profanity (e.g., f-ck, hell, and damn) in work-related situations. Moreover, we find that commonly expressed reasons also depend on commenters' interests.
  6. A multiagent system is a society of autonomous agents whose interactions can be regulated via social norms. In general, the norms of a society are not hardcoded but emerge from the agents' interactions. Specifically, how the agents in a society react to each other's behavior and respond to the reactions of others determines which norms emerge in the society. We think of these reactions by an agent to the satisfactory or unsatisfactory behaviors of another agent as communications from the first agent to the second agent. Understanding these communications is a kind of social intelligence: these communications provide natural drivers for norm emergence by pushing agents toward certain behaviors, which can become established as norms. Whereas it is well known that sanctioning can lead to the emergence of norms, we posit that a broader kind of social intelligence can prove more effective in promoting cooperation in a multiagent system. Accordingly, we develop Nest, a framework that models social intelligence via a wider variety of communications, and understanding of them, than in previous work. To evaluate Nest, we develop a simulated pandemic environment and conduct simulation experiments to compare Nest with baselines considering a combination of three kinds of social communication: sanction, tell, and hint. We find that societies formed of Nest agents converge on norms faster. Moreover, Nest agents effectively avoid undesirable consequences, namely, negative sanctions and deviation from goals, and yield higher satisfaction for themselves than baseline agents despite requiring only an equivalent amount of information.
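    The sanction-driven dynamic described above can be sketched as a toy propensity update: a negative sanction pushes an agent toward the sanctioned-for behavior, and in its absence the propensity decays slightly. This is an illustrative reinforcement rule, not the Nest framework itself; the learning rate and decay are arbitrary choices.

    ```python
    def update_propensity(p, sanctioned, lr=0.1):
        """One toy update step for an agent's propensity to comply
        with an emerging norm: a sanction raises the propensity,
        no sanction lets it decay toward exploration."""
        if sanctioned:
            return min(1.0, p + lr)
        return max(0.0, p - lr / 2)

    # Three consecutive sanctions push a neutral agent toward compliance.
    p = 0.5
    for _ in range(3):
        p = update_propensity(p, sanctioned=True)
    ```

    Richer communications such as tell and hint would, under this reading, adjust the propensity without the cost of a negative sanction.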
  7. Everyone acknowledges the importance of responsible computing, but practical advice is hard to come by. Many important Internet applications exist to accomplish business processes. We investigate how such applications can be geared to support responsibility, as illustrated via sustainability. Sustainability is not only urgent and essential but also challenging because it engages human and societal concerns, diverse success criteria, and an extended temporal and spatial scope. This article introduces a new framework for developing responsible Internet applications that synthesizes the perspectives of Theory of Change, Participatory System Mapping, and Computational Sociotechnical Systems.
  8. The power of norms in both human societies and sociotechnical systems arises from the facts that (1) societal norms, including laws and policies, characterize acceptable behavior in high-level terms and (2) they are not hard controls and can be deviated from. Thus, the design of responsibly autonomous agents faces an essential tension: these agents must both (1) respect applicable norms and (2) deviate from those norms when blindly following them may lead to diminished outcomes. We propose a conceptual foundation for norm deviation. As a guiding framework, we adopt Habermas's theory of communicative action, comprising objective, subjective, and practical validity claims regarding the suitability of deviation. Our analysis thus goes beyond previous studies of norm deviation and yields reasoning guidelines uniting norms and values by which to develop responsible agents.
  9. Conversations among online users sometimes derail, i.e., break down into personal attacks. Derailment interferes with the healthy growth of communities in cyberspace. The ability to predict whether an ongoing conversation will derail could provide valuable advance, even real-time, insight to both interlocutors and moderators. Prior approaches predict conversation derailment retrospectively without the ability to forestall the derailment proactively. Some existing works attempt to make dynamic predictions as the conversation develops, but fail to incorporate multisource information, such as conversational structure and distance to derailment. We propose a hierarchical transformer-based framework that combines utterance-level and conversation-level information to capture fine-grained contextual semantics. We propose a domain-adaptive pretraining objective to unite conversational structure information and a multitask learning scheme to leverage the distance from each utterance to derailment. An evaluation of our framework on two conversation derailment datasets shows an improvement in F1 score for the prediction of derailment. These results demonstrate the effectiveness of incorporating multisource information for predicting the derailment of a conversation. 
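    The multitask labeling described above can be sketched as pairing each utterance with a binary derailment flag and its distance (in turns) to the derailing utterance; this label scheme is an illustrative reading of the setup, not the authors' code.

    ```python
    def derailment_labels(utterances, derail_index):
        """For each utterance, emit (flag, distance): flag is 1 for the
        derailing utterance, and distance counts turns remaining until
        derailment. Conversations that never derail get distance None."""
        labels = []
        for i, _ in enumerate(utterances):
            if derail_index is None:
                labels.append((0, None))
            else:
                labels.append((1 if i == derail_index else 0, derail_index - i))
        return labels

    # Toy conversation that derails at the third utterance.
    labels = derailment_labels(["hi", "calm down", "you idiot"], derail_index=2)
    ```

    A multitask head would then predict the flag and regress the distance jointly from each utterance's representation.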