This study highlights how middle schoolers discuss the benefits and drawbacks of AI-driven conversational agents in learning. Using thematic analysis of focus groups, we identified five themes in students’ views of AI applications in education. Students recognized the benefits of AI in making learning more engaging and providing personalized, adaptable scaffolding. They emphasized that AI use in education needs to be safe and equitable. Students identified the potential of AI in supporting teachers and noted that AI educational agents fall short when compared to emotionally and intellectually complex humans. Overall, we argue that even without technical expertise, middle schoolers can articulate deep, multifaceted understandings of the possibilities and pitfalls of AI in education. Centering student voices in AI design can also provide learners with much-desired agency over their future learning experiences.
Concerning the Responsible Use of AI in the U.S. Criminal Justice System
This article seeks insight into AI decision-making processes in order to better address bias and improve accountability in AI systems.
- Award ID(s): 2300842
- PAR ID: 10657200
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Communications of the ACM
- Volume: 68
- Issue: 9
- ISSN: 0001-0782
- Page Range / eLocation ID: 41 to 44
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI but increase their under-reliance on it, regardless of whether the second opinion is generated by a peer or by another AI model. However, if decision-makers can control when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaborations in decision-making.
-
Abstract The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) focuses on creating trustworthy AI for a variety of environmental and Earth science phenomena. AI2ES includes leading experts from AI, atmospheric and ocean science, risk communication, and education, who work synergistically to develop and test trustworthy AI methods that transform our understanding and prediction of the environment. Trust is a social phenomenon, and our integration of risk communication research across AI2ES activities provides an empirical foundation for developing user‐informed, trustworthy AI. AI2ES also features activities to broaden participation and for workforce development that are fully integrated with AI2ES research on trustworthy AI, environmental science, and risk communication.
-
Abstract This paper explores the evolution of Geodesign in addressing spatial and environmental challenges from its early foundations to the recent integration of artificial intelligence (AI). AI enhances existing Geodesign methods by automating spatial data analysis, improving land use classification, refining heat island effect assessment, optimizing energy use, facilitating green infrastructure planning, and generating design scenarios. Despite the transformative potential of AI in Geodesign, challenges related to data quality, model interpretability, and ethical concerns such as privacy and bias persist. This paper highlights case studies that demonstrate the application of AI in Geodesign, offering insights into its role in understanding existing systems and designing future changes. The paper concludes by advocating for the responsible and transparent integration of AI to ensure equitable and effective Geodesign outcomes.
-
With the rapid development of decision aids that are driven by AI models, the practice of AI-assisted decision making has become increasingly prevalent. To improve the human-AI team performance in decision making, earlier studies mostly focus on enhancing humans' capability in better utilizing a given AI-driven decision aid. In this paper, we tackle this challenge through a complementary approach—we aim to train behavior-aware AI by adjusting the AI model underlying the decision aid to account for humans' behavior in adopting AI advice. In particular, as humans are observed to accept AI advice more when their confidence in their own judgement is low, we propose to train AI models with a human-confidence-based instance weighting strategy, instead of solving the standard empirical risk minimization problem. Under an assumed, threshold-based model characterizing when humans will adopt the AI advice, we first derive the optimal instance weighting strategy for training AI models. We then validate the efficacy and robustness of our proposed method in improving the human-AI joint decision making performance through systematic experimentation on synthetic datasets. Finally, via randomized experiments with real human subjects along with their actual behavior in adopting the AI advice, we demonstrate that our method can significantly improve the decision making performance of the human-AI team in practice.
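The instance-weighting idea in the abstract above can be sketched in a few lines. This is an illustrative example only, not the paper's derived optimal weighting: it assumes a simple hypothetical rule in which training examples are up-weighted when the human decision-maker's confidence is low (i.e., when the AI's advice is most likely to be adopted), and fits a weighted logistic regression by gradient descent.

```python
import numpy as np

def fit_weighted_logreg(X, y, confidence, lr=0.5, epochs=2000):
    """Fit logistic regression with per-example weights.

    Hypothetical weighting rule for illustration: examples where human
    confidence is low get higher weight, since those are the cases where
    the AI's prediction is most likely to drive the final decision.
    (The paper instead derives optimal weights under a threshold-based
    model of advice adoption.)
    """
    sample_w = 1.0 - confidence          # low confidence -> high weight
    sample_w = sample_w / sample_w.sum() # normalize weights
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y=1)
        err = sample_w * (p - y)                 # weighted residuals
        w -= lr * (X.T @ err)                    # weighted gradient step
        b -= lr * err.sum()
    return w, b
```

Compared with standard empirical risk minimization, the only change is the `sample_w` term multiplying each example's gradient contribution, so any loss-based model (not just logistic regression) could adopt the same scheme.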

