


Award ID contains: 2026324


  2.
    Purpose of Review: A transdisciplinary systems approach to the design of an artificial intelligence (AI) decision support system can more effectively address the limitations of AI systems. By incorporating stakeholder input early in the process, the final product is more likely to improve decision-making and effectively reduce kidney discard.

    Recent Findings: Kidney discard is a complex problem that will require increased coordination between transplant stakeholders. An AI decision support system has significant potential, but there are challenges associated with overfitting, poor explainability, and inadequate trust. A transdisciplinary approach provides a holistic perspective that incorporates expertise from engineering, social science, and transplant healthcare. A systems approach leverages techniques for visualizing the system architecture to support solution design from multiple perspectives.

    Summary: Developing a systems-based approach to AI decision support involves engaging in a cycle of documenting the system architecture, identifying pain points, developing prototypes, and validating the system. Early efforts have focused on describing process issues to prioritize tasks that would benefit from AI support.
  3. Keathley, H.; Enos, J.; Parrish, M. (Eds.)
    The role of human-machine teams in society is growing as big data and computing power expand. One popular approach to AI is deep learning, which is useful for classification, feature identification, and predictive modeling. However, deep learning models often suffer from inadequate transparency and poor explainability. One aspect of human systems integration is the design of interfaces that support human decision-making. AI models embed multiple types of uncertainty, which may be difficult for users to understand, so the humans who use these tools need to understand how much to trust the AI. This study evaluates one simple approach to communicating uncertainty: a visual confidence bar ranging from 0 to 100%. We perform a human-subject online experiment using an existing image-recognition deep learning model to test the effect of (1) providing single vs. multiple recommendations from the AI and (2) including uncertainty information. For each image, participants described the subject in an open text box and rated their confidence in their answers. Performance was evaluated at four levels of accuracy, ranging from an exact match with the image label to the correct category of the image. The results suggest that AI recommendations increase accuracy, even when the human and the AI have different definitions of accuracy. In addition, providing multiple ranked recommendations, with or without the confidence bar, increases operator confidence and reduces perceived task difficulty. More research is needed to determine how people interpret uncertain information from an AI system and to develop effective visualizations for communicating uncertainty.
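The study above displays ranked AI recommendations alongside a 0-100% confidence bar, but the abstract does not specify how that interface was implemented. A minimal sketch of the general idea, assuming a list of (label, confidence) pairs from some classifier; the function name, the text-based bar, and the example labels are all illustrative assumptions, not the study's actual interface:

```python
def render_recommendations(predictions, top_k=3, bar_width=20):
    """Render ranked recommendations with a textual confidence bar.

    `predictions` is a list of (label, confidence) pairs, with
    confidence in [0, 1]. Only the top_k highest-confidence
    entries are shown, mirroring the "multiple ranked
    recommendations" condition described in the abstract.
    """
    ranked = sorted(predictions, key=lambda p: p[1], reverse=True)[:top_k]
    lines = []
    for rank, (label, conf) in enumerate(ranked, start=1):
        filled = round(conf * bar_width)           # proportion of bar to fill
        bar = "#" * filled + "-" * (bar_width - filled)
        lines.append(f"{rank}. {label:<12} [{bar}] {conf:.0%}")
    return "\n".join(lines)

# Hypothetical classifier output for one image:
print(render_recommendations(
    [("golden retriever", 0.62), ("labrador", 0.21), ("beagle", 0.07)]))
```

Dropping `top_k` to 1 and omitting the bar would correspond to the single-recommendation, no-uncertainty condition, so the same renderer can produce all experimental conditions.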