Online platforms offer forums with rich, real-world illustrations of moral reasoning. Among these, the r/AmITheAsshole (AITA) subreddit has become a prominent resource for computational research. In AITA, a user (author) describes an interpersonal moral scenario, and other users (commenters) provide moral judgments, with reasons, about who in the scenario is blameworthy. Prior work has focused on predicting moral judgments from AITA posts and comments. This study introduces the concept of moral sparks: key narrative excerpts that commenters highlight as pivotal to their judgments. Sparks thus mark points of heightened moral attention, guiding readers to the rationales behind a judgment. Analyzing 24,676 posts and 175,988 comments, we demonstrate that findings from social psychology on moral judgment extend to real-world scenarios. For example, negative traits (e.g., rude) amplify moral attention, whereas sympathetic traits (e.g., vulnerable) diminish it. Similarly, linguistic features such as emotionally charged terms (e.g., anger) heighten moral attention, whereas positive or neutral terms (e.g., leisure and bio) attenuate it. Moreover, we find that incorporating moral sparks enhances pretrained language models' performance at predicting moral judgments, achieving gains in F1 scores of up to 5.5%. These results demonstrate that moral sparks, derived directly from AITA narratives, capture key aspects of moral judgment and perform comparably to prior methods that depend on human annotation or large-scale generative modeling.
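To make the modeling step concrete, here is a minimal sketch of one way a spark could be fed to a pretrained classifier: the post and its spark are encoded as a sentence pair so the model can attend to the highlighted excerpt alongside the full narrative. The checkpoint, label set, and pairing scheme are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: pairing an AITA post with its commenter-highlighted
# "moral spark" as input to a pretrained classifier. Checkpoint, labels,
# and pairing scheme are assumptions, not the paper's exact configuration.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g., author-at-fault vs. not-at-fault
)

post = "I told my roommate she could not borrow my car again after she ..."
spark = "she could not borrow my car again"  # excerpt commenters flagged as pivotal

# Encode post and spark as a sentence pair; fine-tune on judgment labels.
inputs = tokenizer(post, spark, truncation=True, return_tensors="pt")
logits = model(**inputs).logits
```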
Morality, Risk-Taking and Psychopathic Tendencies: An Empirical Study
Research in empirical moral psychology has consistently found negative correlations between morality and both risk-taking and psychopathic tendencies. However, prior research did not sufficiently explore intervening or moderating factors, and prior measures of moral preference (e.g., sacrificial dilemmas) have a pronounced lack of ecological validity. This study seeks to address these two gaps in the literature. First, it used Preference for Precepts Implied in Moral Theories (PPIMT), which offers a novel, more nuanced, and ecologically valid measure of moral judgment. Second, it examined whether risk-taking moderates the relationship between psychopathic tendencies and moral judgment. Results indicated that models that incorporated risk-taking as a moderator between psychopathic tendencies and moral judgment fit the data better than those that incorporated psychopathic tendencies and risk-taking as exogenous variables, suggesting that the association between psychopathic tendencies and moral judgment is influenced by the level of risk-taking. Therefore, future research investigating linkages between psychopathic tendencies and moral precepts may do well to incorporate risk-taking and risky behaviors to further strengthen the understanding of moral judgment in these individuals.
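As a reader's aid, a minimal moderation specification of the kind the abstract describes might take the following regression form, with moral judgment M, psychopathic tendencies P, and risk-taking R. This is an illustrative sketch, not the authors' fitted structural model.

```latex
% Illustrative moderation specification (an assumption for exposition,
% not the authors' fitted model): moral judgment M as a function of
% psychopathic tendencies P, risk-taking R, and their interaction.
M = \beta_0 + \beta_1 P + \beta_2 R + \beta_3 (P \times R) + \varepsilon
```

A nonzero \(\beta_3\) corresponds to the reported finding that the association between psychopathic tendencies and moral judgment varies with the level of risk-taking.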
- Award ID(s): 2043612
- PAR ID: 10391628
- Date Published:
- Journal Name: Frontiers in Psychology
- Volume: 13
- ISSN: 1664-1078
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Computational preference elicitation methods are tools used to learn people's preferences quantitatively in a given context. Recent work on preference elicitation advocates active learning as an efficient method to iteratively construct queries (framed as comparisons between context-specific cases) that are likely to be most informative about an agent's underlying preferences. In this work, we argue that the use of active learning for moral preference elicitation relies on certain assumptions about the underlying moral preferences, which can be violated in practice. Specifically, we highlight the following common assumptions: (a) preferences are stable over time and not sensitive to the sequence of presented queries, (b) the appropriate hypothesis class is chosen to model moral preferences, and (c) noise in the agent's responses is limited. While these assumptions can be appropriate for preference elicitation in certain domains, prior research on moral psychology suggests they may not be valid for moral judgments. Through a synthetic simulation of preferences that violate the above assumptions, we observe that in certain settings active learning can perform no better, and sometimes worse, than basic random query selection. Yet, simulation results also demonstrate that active learning can still be viable if the degree of instability or noise is relatively small and the agent's preferences can be approximately represented with the hypothesis class used for learning. Our study highlights the nuances associated with effective moral preference elicitation in practice and advocates for the cautious use of active learning as a methodology to learn moral preferences.
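The contrast this abstract reports can be reproduced in miniature. The sketch below simulates a noisy respondent with a linear utility over case-pair differences and compares uncertainty-sampling active learning against random query selection. The utility model, noise level, and least-squares estimator are illustrative assumptions, not the paper's simulation design.

```python
# Toy simulation: active (uncertainty-sampling) vs. random query selection
# for pairwise preference learning with a noisy respondent. All modeling
# choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_pool, n_queries, noise = 5, 500, 40, 0.3

w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)       # agent's latent preference weights
pool = rng.normal(size=(n_pool, d))    # feature differences between paired cases

def respond(x):
    """Noisy pairwise response: +1 prefers the first case, -1 the second."""
    y = 1.0 if x @ w_true > 0 else -1.0
    return -y if rng.random() < noise else y

def estimate(X, y):
    """Least-squares surrogate for the learner's preference estimate."""
    return np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)[0]

def error(w):
    w = w / (np.linalg.norm(w) + 1e-12)
    return float(np.linalg.norm(w - w_true))

def run(active):
    X, y = [pool[0]], [respond(pool[0])]   # seed with one query
    for _ in range(n_queries - 1):
        w = estimate(X, y)
        # Uncertainty sampling picks the pair the current estimate is least
        # decisive about; the baseline picks a pair uniformly at random.
        i = int(np.argmin(np.abs(pool @ w))) if active else int(rng.integers(n_pool))
        X.append(pool[i])
        y.append(respond(pool[i]))
    return error(estimate(X, y))

print("active-learning error:", round(run(True), 3))
print("random-query error:", round(run(False), 3))
```

Raising `noise` or perturbing `w_true` between queries is one way to probe the instability conditions the abstract describes.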
-
Abstract: Constraining the actions of AI systems is one promising way to ensure that these systems behave in a way that is morally acceptable to humans. But constraints alone have drawbacks: in many AI systems they are not flexible, and if they are too rigid they can preclude actions that are actually acceptable in certain contextual situations. Humans, on the other hand, can often decide when a simple and seemingly inflexible rule should be overridden based on the context. In this paper, we empirically investigate the way humans make these contextual moral judgments, with the goal of building AI systems that understand when to follow and when to override constraints. We propose a novel and general preference-based graphical model that captures a modification of standard dual-process theories of moral judgment. We then detail the design, implementation, and results of a study of human participants who judge whether it is acceptable to break a well-established rule: no cutting in line. We then develop an instance of our model and compare its performance to that of standard machine learning approaches on the task of predicting the behavior of human participants in the study, showing that our preference-based approach more accurately captures the judgments of human decision-makers. It also provides a flexible method to model the relationship between variables for moral decision-making tasks that can be generalized to other settings.
-
Introduction: Moral judgment is of critical importance in the work context because of its implicit or explicit omnipresence in a wide range of workplace practices. The moral aspects of actual behaviors, intentions, and consequences represent areas of deep preoccupation, as exemplified in current corporate social responsibility programs, yet there remain ongoing debates on how such aspects of morality (behaviors, intentions, and consequences) interact. The ADC Model of moral judgment integrates the theoretical insights of three major moral theories (virtue ethics, deontology, and consequentialism) into a single model, which explains how moral judgment occurs through parallel evaluation of three components: the character of a person (Agent-component), their actions (Deed-component), and the consequences brought about in the situation (Consequences-component). The model offers the possibility of overcoming difficulties encountered by single- and dual-component theories. Methods: We designed a 2 × 2 × 2 between-subjects vignette experiment with a Germany-wide sample of employed respondents (N = 1,349) to test this model. Results: The Deed-component affected willingness to cooperate in the work context, an effect mediated by moral judgments; these effects also varied depending on the levels of the Agent- and Consequences-components. Discussion: The results thereby exemplify the usefulness of the ADC Model in the work context by showing how the distinct components of morality affect moral judgment.
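For readers who want a concrete form, one illustrative specification of the parallel evaluation in a 2 × 2 × 2 design is a factorial model with interactions, which lets the effect of the Deed-component vary across levels of the Agent- and Consequences-components. This is an assumed form for exposition, not the authors' estimation model.

```latex
% Illustrative factorial specification for the 2 x 2 x 2 vignette design
% (an assumption for exposition, not the authors' exact model): judgment J
% from Agent (A), Deed (D), and Consequences (C) components.
J = \beta_A A + \beta_D D + \beta_C C
    + \beta_{AD} A D + \beta_{AC} A C + \beta_{DC} D C + \varepsilon
```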
-
A large body of research has investigated responses to artificial scenarios (e.g., trolley problem) where maximizing beneficial outcomes for the greater good (utilitarianism) conflicts with adherence to moral norms (deontology). The CNI model is a computational model that quantifies sensitivity to consequences for the greater good (C), sensitivity to moral norms (N), and general preference for inaction versus action (I) in responses to plausible moral dilemmas based on real-world events. Expanding on a description of the CNI model, the current article provides (a) a comprehensive review of empirical findings obtained with the CNI model, (b) an analysis of their theoretical implications, (c) a discussion of criticisms of the CNI model, and (d) an overview of alternative approaches to disentangle multiple factors underlying moral-dilemma responses and the relation of these approaches to the CNI model. The article concludes with a discussion of open questions and new directions for future research. Public Abstract: How do people make judgments about actions that violate moral norms yet maximize the greater good (e.g., sacrificing the well-being of a small number of people for the well-being of a larger number of people)? Research on this question has been criticized for relying on highly artificial scenarios and for conflating multiple distinct factors underlying responses in moral dilemmas. The current article reviews research that used a computational modeling approach to disentangle the roles of multiple distinct factors in responses to plausible moral dilemmas based on real-world events. By disentangling sensitivity to consequences, sensitivity to moral norms, and general preference for inaction versus action in responses to realistic dilemmas, the reviewed work provides a more nuanced understanding of how people make judgments about the right course of action in moral dilemmas.
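As commonly presented, the CNI model is a multinomial processing tree. For the classic dilemma type (a proscriptive norm forbids an action whose benefits exceed its costs), the response probabilities take roughly the following form; this rendering follows the standard exposition of the model and should be checked against the original article.

```latex
% CNI processing tree, as commonly presented, for the classic dilemma type
% (proscriptive norm, benefits of action exceed costs): with probability C
% the response follows consequences, with probability (1-C)N it follows the
% norm, otherwise it reflects the general action/inaction preference I.
\begin{aligned}
P(\text{action})   &= C + (1 - C)(1 - N)(1 - I)\\
P(\text{inaction}) &= (1 - C)\,N + (1 - C)(1 - N)\,I
\end{aligned}
```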