This content will become publicly available on September 1, 2026

Title: Concerning the Responsible Use of AI in the U.S. Criminal Justice System
Seeking insight into AI decision-making processes to better address bias and improve accountability in AI systems.
Award ID(s): 2300842
PAR ID: 10657200
Author(s) / Creator(s):
Publisher / Repository: ACM
Date Published:
Journal Name: Communications of the ACM
Volume: 68
Issue: 9
ISSN: 0001-0782
Page Range / eLocation ID: 41 to 44
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract: In response to Li, Reigh, He, and Miller's commentary, "Can we and should we use artificial intelligence for formative assessment in science," we argue that artificial intelligence (AI) is already being widely employed in formative assessment across various educational contexts. While agreeing with Li et al.'s call for further studies on equity issues related to AI, we emphasize the need for science educators to adapt to the AI revolution that has outpaced the research community. We challenge the somewhat restrictive view of formative assessment presented by Li et al., highlighting the significant contributions of AI in providing formative feedback to students, assisting teachers in assessment practices, and aiding instructional decisions. We contend that AI‐generated scores should not be equated with the entirety of formative assessment practice; no single assessment tool can capture all aspects of student thinking and backgrounds. We address concerns raised by Li et al. regarding AI bias and emphasize the importance of empirical testing and evidence‐based arguments when referring to bias. We assert that AI‐based formative assessment does not necessarily lead to inequity and can, in fact, contribute to more equitable educational experiences. Furthermore, we discuss how AI can facilitate the diversification of representational modalities in assessment practices and highlight the potential benefits of AI in saving teachers’ time and providing them with valuable assessment information. We call for a shift in perspective, from viewing AI as a problem to be solved to recognizing its potential as a collaborative tool in education. We emphasize the need for future research to focus on the effective integration of AI in classrooms, teacher education, and the development of AI systems that can adapt to diverse teaching and learning contexts. We conclude by underlining the importance of addressing AI bias, understanding its implications, and developing guidelines for best practices in AI‐based formative assessment.
  2. This study highlights how middle schoolers discuss the benefits and drawbacks of AI-driven conversational agents in learning. Using thematic analysis of focus groups, we identified five themes in students’ views of AI applications in education. Students recognized the benefits of AI in making learning more engaging and providing personalized, adaptable scaffolding. They emphasized that AI use in education needs to be safe and equitable. Students identified the potential of AI in supporting teachers and noted that AI educational agents fall short when compared to emotionally and intellectually complex humans. Overall, we argue that even without technical expertise, middle schoolers can articulate deep, multifaceted understandings of the possibilities and pitfalls of AI in education. Centering student voices in AI design can also provide learners with much-desired agency over their future learning experiences. 
  3. AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that when the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI but increase their under-reliance on it, regardless of whether the second opinion is generated by a peer or by another AI model. However, when decision-makers can choose when to solicit a peer's second opinion, we find that actively soliciting second opinions can, in some cases, mitigate over-reliance on AI without inducing increased under-reliance. We conclude by discussing the implications of our findings for promoting effective human-AI collaboration in decision-making.
  4. Abstract: The autoinducer‐2 (AI‐2) quorum sensing system is involved in a range of population‐based bacterial behaviors and has been engineered for cell–cell communication in synthetic biology systems. Investigation into the cellular mechanisms of AI‐2 processing has shown that overexpression of uptake genes increases the AI‐2 uptake rate, and that genomic deletion of degradation genes lowers the AI‐2 level required for activation of reporter genes. Here, we combine these two strategies to engineer an Escherichia coli strain with an enhanced ability to detect and respond to AI‐2. In an E. coli strain that does not produce AI‐2, we monitored AI‐2 uptake and reporter protein expression in a strain that overproduced the AI‐2 uptake or phosphorylation units LsrACDB or LsrK, a strain with deletions of the AI‐2 degradation units LsrF and LsrG, and an “enhanced” strain with both overproduction of AI‐2 uptake elements and deletion of AI‐2 degradation elements. By adding up to 40 μM AI‐2 to growing cell cultures, we determined that this “enhanced” AI‐2-sensitive strain both takes up AI‐2 more rapidly and responds with greater reporter protein expression than the others. This work expands the toolbox for manipulating AI‐2 quorum sensing processes, both in native environments and for synthetic biology applications.
  5. Abstract: The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) focuses on creating trustworthy AI for a variety of environmental and Earth science phenomena. AI2ES includes leading experts from AI, atmospheric and ocean science, risk communication, and education, who work synergistically to develop and test trustworthy AI methods that transform our understanding and prediction of the environment. Trust is a social phenomenon, and our integration of risk communication research across AI2ES activities provides an empirical foundation for developing user‐informed, trustworthy AI. AI2ES also features broadening-participation and workforce-development activities that are fully integrated with AI2ES research on trustworthy AI, environmental science, and risk communication.