Search for: All records

Award ID contains: 2043612


  1. Abstract: The imminent deployment of autonomous vehicles requires algorithms capable of making moral decisions in relevant traffic situations. Some scholars in the ethics of autonomous vehicles hope to align such intelligent systems with human moral judgment. For this purpose, studies like the Moral Machine Experiment have collected data about human decision-making in trolley-like traffic dilemmas. This paper first argues that the trolley dilemma is an inadequate experimental paradigm for investigating traffic moral judgments because it does not include agents’ character-based considerations and cannot facilitate the investigation of low-stakes, mundane traffic scenarios. In light of these limitations of the trolley paradigm, the paper presents an alternative experimental framework that addresses these issues. The proposed solution combines the creation of mundane traffic moral scenarios in virtual reality with the Agent-Deed-Consequences (ADC) model of moral judgment as a moral-psychological framework. This paradigm shift potentially increases the ecological validity of future studies by providing more realism and incorporating character considerations into traffic actions.
  2. Integrating artificial intelligence (AI) technologies into law enforcement has become a concern of contemporary politics and public discourse. In this paper, we qualitatively examine perspectives on AI technologies through 20 semi-structured interviews with law enforcement professionals in North Carolina. We investigate how integrating AI technologies, such as predictive policing and autonomous vehicle (AV) technology, affects the relationships between communities and police jurisdictions. The evidence suggests that police officers maintain that AI plays a limited role in policing but believe the technologies will continue to expand, improving public safety and increasing policing capability. Conversely, police officers believe that AI will not necessarily increase trust between police and the community, citing ethical concerns and the potential to infringe on civil rights. It is thus argued that the trends toward integrating AI technologies into law enforcement are not without risk. Policymaking guided by public consensus and collaborative discussion with law enforcement professionals must aim to promote accountability through the responsible design of AI in policing, with the end state of providing societal benefits and mitigating harm to the populace. Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement.
  3. Introduction: Moral judgment is of critical importance in the work context because of its implicit or explicit omnipresence in a wide range of workplace practices. The moral aspects of actual behaviors, intentions, and consequences represent areas of deep preoccupation, as exemplified in current corporate social responsibility programs, yet there remain ongoing debates about how best to understand how these aspects of morality (behaviors, intentions, and consequences) interact. The ADC Model of moral judgment integrates the theoretical insights of three major moral theories (virtue ethics, deontology, and consequentialism) into a single model, which explains how moral judgment occurs through parallel evaluation of three components: the character of a person (Agent-component), their actions (Deed-component), and the consequences brought about in the situation (Consequences-component). The model offers the possibility of overcoming difficulties encountered by single- or dual-component theories. Methods: We designed a 2 × 2 × 2 between-subjects vignette experiment with a Germany-wide sample of employed respondents (N = 1,349) to test this model. Results: The Deed-component affects willingness to cooperate in the work context, and this effect is mediated via moral judgments. These effects also varied depending on the levels of the Agent- and Consequences-components. Discussion: The results exemplify the usefulness of the ADC Model in the work context by showing how the distinct components of morality affect moral judgment.
  4. Abstract: Ethical considerations are the fabric of society; they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations in the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision-making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that effect, models of human judgment and decision-making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI teammates and to clarify how and why humans (dis)trust machines. The current paper examines the ADC model as applied to the context of HAIT, and the challenges associated with using human-centric ethical considerations in an AI context.
  5. Research in empirical moral psychology has consistently found negative correlations between morality and both risk-taking and psychopathic tendencies. However, prior research has not sufficiently explored intervening or moderating factors, and prior measures of moral preference (e.g., sacrificial dilemmas) show a pronounced lack of ecological validity. This study addresses these two gaps in the literature. First, it used the Preference for Precepts Implied in Moral Theories (PPIMT), a novel, more nuanced, and more ecologically valid measure of moral judgment. Second, it examined whether risk-taking moderates the relationship between psychopathic tendencies and moral judgment. Models that incorporated risk-taking as a moderator between psychopathic tendencies and moral judgment fit the data better than models that treated psychopathic tendencies and risk-taking as exogenous variables, suggesting that the association between psychopathic tendencies and moral judgment is influenced by the level of risk-taking. Future research investigating linkages between psychopathic tendencies and moral precepts may therefore do well to incorporate risk-taking and risky behaviors to further strengthen the understanding of moral judgment in these individuals.