Title: Failures in the Loop: Human Leadership in AI-Based Decision-Making
The dark side of AI has been a persistent focus in popular science and academic discussions (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, issuing ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption risk being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. Used as an analogical heuristic, such framing can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of holding unrealistic expectations for the capabilities of these systems.
Award ID(s): 1828010
PAR ID: 10514462
Author(s) / Creator(s):
Publisher / Repository: IEEE
Date Published:
Journal Name: IEEE Transactions on Technology and Society
Volume: 5
Issue: 1
ISSN: 2637-6415
Page Range / eLocation ID: 2 to 13
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract: Artificial intelligence (AI) methods have revolutionized and redefined the landscape of data analysis in business, healthcare, and technology. These methods have innovated the applied mathematics, computer science, and engineering fields and are showing considerable potential for risk science, especially in the disaster risk domain. The disaster risk field has yet to establish itself as a necessary application domain for AI by defining how to responsibly balance AI and disaster risk. This study addresses four questions: (1) How is AI being used for disaster risk applications, and how are these applications addressing the principles and assumptions of risk science? (2) What are the benefits of using AI for risk applications, and what are the benefits of applying risk principles and assumptions to AI-based applications? (3) What are the synergies between AI and risk science applications? (4) What are the characteristics of effective use of fundamental risk principles and assumptions for AI-based applications? This study develops and disseminates an online survey questionnaire that leverages expertise from risk and AI professionals to identify the most important characteristics related to AI and risk, then presents a framework for gauging how AI and disaster risk can be balanced. This study is the first to develop a classification system for applying risk principles to AI-based applications. This classification contributes to the understanding of AI and risk by exploring how AI can be used to manage risk, how AI methods introduce new or additional risk, and whether fundamental risk principles and assumptions are sufficient for AI-based applications.
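    The abstract above does not detail the classification system itself, so the following is only a minimal, hypothetical sketch of what "gauging how AI and disaster risk can be balanced" might look like as a scoring rubric in Python; the principle names, 0-2 scale, and cutoffs are invented for illustration and are not taken from the study.

    # Hypothetical illustration only: a toy rubric for rating how well an AI-based
    # disaster-risk application addresses a handful of generic risk principles.
    # The principles, the 0-2 scale, and the cutoffs below are invented, not the study's framework.

    PRINCIPLES = [
        "uncertainty characterization",
        "transparency",
        "stakeholder involvement",
        "validation against past events",
    ]

    def classify_application(scores: dict[str, int]) -> str:
        """scores: principle -> rating on a 0 (ignored) to 2 (fully addressed) scale."""
        missing = [p for p in PRINCIPLES if p not in scores]
        if missing:
            raise ValueError(f"unscored principles: {missing}")
        total = sum(scores.values())
        if total >= 7:
            return "risk-informed"        # risk principles largely addressed
        if total >= 4:
            return "partially balanced"   # some principles addressed
        return "AI-driven, risk-naive"    # AI capability without risk grounding

    if __name__ == "__main__":
        example = {"uncertainty characterization": 1, "transparency": 2,
                   "stakeholder involvement": 1, "validation against past events": 1}
        print(classify_application(example))  # -> "partially balanced"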
  2. Background: Machine learning approaches, including deep learning, have demonstrated remarkable effectiveness in the diagnosis and prediction of diabetes. However, these approaches often operate as opaque black boxes, leaving health care providers in the dark about the reasoning behind predictions. This opacity poses a barrier to the widespread adoption of machine learning in diabetes and health care, leading to confusion and eroding trust. Objective: This study aimed to address this critical issue by developing and evaluating an explainable artificial intelligence (AI) platform, XAI4Diabetes, designed to give health care professionals a clear understanding of AI-generated predictions and recommendations for diabetes care. XAI4Diabetes not only delivers diabetes risk predictions but also furnishes easily interpretable explanations for complex machine learning models and their outcomes. Methods: XAI4Diabetes features a versatile multimodule explanation framework that leverages machine learning, knowledge graphs, and ontologies. The platform comprises four essential modules: (1) knowledge base, (2) knowledge matching, (3) prediction, and (4) interpretation. By harnessing AI techniques, XAI4Diabetes forecasts diabetes risk and provides valuable insights into the prediction process and outcomes. A structured, survey-based user study assessed the app’s usability and influence on participants’ comprehension of machine learning predictions in real-world patient scenarios. Results: A prototype mobile app was developed and subjected to thorough usability studies and satisfaction surveys. The evaluation findings underscore the substantial improvement in medical professionals’ comprehension of key aspects, including (1) the diabetes prediction process, (2) the data sets used for model training, (3) the data features used, and (4) the relative significance of different features in prediction outcomes. Most participants reported heightened understanding of and trust in AI predictions after using XAI4Diabetes. The satisfaction survey results further revealed a high level of overall user satisfaction with the tool. Conclusions: This study introduces XAI4Diabetes, a versatile multimodule explainable prediction platform tailored to diabetes care. By enabling transparent diabetes risk predictions and delivering interpretable insights, XAI4Diabetes empowers health care professionals to understand the AI-driven decision-making process, thereby fostering transparency and trust. These advancements hold the potential to mitigate biases and facilitate the broader integration of AI in diabetes care.
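    As a rough illustration of how the four modules named in the abstract (knowledge base, knowledge matching, prediction, interpretation) could be composed, here is a hypothetical Python sketch; the class names, feature weights, and explanation text are placeholders and do not reflect the actual XAI4Diabetes implementation.

    # Hypothetical sketch of a four-module explainable-prediction pipeline, loosely
    # following the module names in the XAI4Diabetes abstract. Nothing here comes
    # from the published system; weights and thresholds are placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeBase:
        # Module 1: domain knowledge, e.g. risk-factor descriptions keyed by feature name.
        facts: dict = field(default_factory=lambda: {
            "bmi": "Body-mass index above 30 is an established diabetes risk factor.",
            "glucose": "Fasting glucose above 125 mg/dL indicates elevated risk.",
        })

    @dataclass
    class Prediction:
        risk_score: float
        contributing_features: dict  # feature name -> contribution weight

    def match_knowledge(kb: KnowledgeBase, record: dict) -> dict:
        # Module 2: knowledge matching - keep only facts relevant to this patient record.
        return {f: kb.facts[f] for f in record if f in kb.facts}

    def predict(record: dict) -> Prediction:
        # Module 3: prediction - placeholder linear score; a real system would call a trained model.
        weights = {"bmi": 0.02, "glucose": 0.005}
        contributions = {f: weights.get(f, 0.0) * v for f, v in record.items()}
        return Prediction(risk_score=min(1.0, sum(contributions.values())),
                          contributing_features=contributions)

    def interpret(pred: Prediction, matched: dict) -> list[str]:
        # Module 4: interpretation - pair each influential feature with its knowledge-base explanation.
        ranked = sorted(pred.contributing_features.items(), key=lambda kv: -kv[1])
        return [f"{feat}: contribution {contrib:.2f}. {matched.get(feat, '')}" for feat, contrib in ranked]

    if __name__ == "__main__":
        kb = KnowledgeBase()
        patient = {"bmi": 31.0, "glucose": 140.0}
        pred = predict(patient)
        print(f"Predicted risk: {pred.risk_score:.2f}")
        for line in interpret(pred, match_knowledge(kb, patient)):
            print(line)

    The only point of the sketch is the shape of the pipeline: model contributions are mapped back to human-readable domain knowledge, which is the transparency property the study emphasizes.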
  3. We study how an agent learns from endogenous data when their prior belief is misspecified. We show that only uniform Berk–Nash equilibria can be long-run outcomes, and that all uniformly strict Berk–Nash equilibria have an arbitrarily high probability of being the long-run outcome for some initial beliefs. When the agent believes the outcome distribution is exogenous, every uniformly strict Berk–Nash equilibrium has positive probability of being the long-run outcome for any initial belief. We generalize these results to settings where the agent observes a signal before acting.
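    For readers unfamiliar with the solution concept, the following is a standard rendering of a Berk–Nash equilibrium (in the sense of Esponda and Pouzo), not necessarily the exact notation used in the paper above. A strategy $\sigma$ paired with a belief $\mu \in \Delta(\Theta)$ is a Berk–Nash equilibrium if (i) $\sigma$ is optimal against the outcome distribution the agent expects under $\mu$, and (ii) $\mu$ is concentrated on the set of models that best fit the data generated by $\sigma$, i.e. on
    $$\Theta^{*}(\sigma) \;=\; \operatorname*{arg\,min}_{\theta \in \Theta} \; \mathbb{E}_{\sigma}\!\left[\, D_{\mathrm{KL}}\!\left( Q(\cdot \mid a) \,\middle\|\, Q_{\theta}(\cdot \mid a) \right) \right],$$
    where $Q(\cdot \mid a)$ is the true outcome distribution given action $a$ and $Q_{\theta}(\cdot \mid a)$ is the distribution predicted by the (possibly misspecified) model $\theta$. Roughly, "uniformly strict" strengthens condition (i) to require that the equilibrium action be the unique best reply to every belief supported on $\Theta^{*}(\sigma)$, which is the robustness property behind the long-run convergence results summarized above.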
  4. HCI research has explored AI as a design material, suggesting that designers can envision AI’s design opportunities to improve UX. Recent research has claimed that enterprise applications offer an opportunity for AI innovation at the user experience level. We conducted design workshops to explore the practices of experienced designers who work on cross-functional AI teams in the enterprise. We discussed how designers successfully work with AI and where they struggle with it. Our findings revealed that designers can innovate at the system and service levels. We also discovered that making a case for an AI feature’s return on investment is a barrier for designers when they propose AI concepts and ideas. Our discussions produced novel insights on designers’ roles on AI teams and the boundary objects they use for collaborating with data scientists. We discuss the implications of these findings as opportunities for future research aiming to empower designers in working with data and AI.
  5. There have been increased calls to include sociotechnical thinking (grappling with issues of power, history, and culture) throughout the undergraduate engineering curriculum. One way this more expansive framing of engineering has been integrated into engineering courses is through in-class discussions. There is a need to understand what students are attending to in these conversations. In particular, we are interested in how students frame and justify their arguments in small-group discussions. This study is part of an NSF-funded research project to implement and study the integration of sociotechnical components throughout a first-year computing for engineers course. In one iteration of the revised course, each week students read a news article on a current example of the uneven impacts of technology, then engaged in in-class small-group discussions. In this study, we analyze students’ discourse to answer the research questions: What arguments do students use to argue against the use of a technology? How do these arguments relate to common narratives about technology? In this qualitative case study, we analyzed video recordings of the small-group discussions of two focus groups discussing the use of AI in hiring. We looked closely at the justifications students gave for their stated positions and how they relate to the common narratives of technocracy, free market idealism, technological neutrality, and technological determinism. We found that all students in both groups rejected these common narratives. We saw students argue that (1) AI technology does not solve the hiring problem well, (2) it is important to regulate AI, (3) using AI for hiring will cause diversity to stagnate, and (4) using AI for hiring unfairly privileges some groups of people over others. While students in both groups rejected the common narratives, only one group explicitly and consistently centered those who are harmed and how this harm would likely occur. The other group consistently rejected the narratives using vague, safe language and never explicitly mentioned who is harmed by the technology. As a result, only one group’s discussion was clearly centered on justice concerns. These results have implications for how to scaffold small-group sociotechnical discussions, what instructors should attend to during these discussions, and how to support students in orienting toward systemic impacts and sustaining a focus on justice.