Title: A User Interface to Communicate Interpretable AI Decisions to Radiologists
Tools for computer-aided diagnosis based on deep learning have become increasingly important in the medical field. Such tools can be useful, but require effective communication of their decision-making process in order to safely and meaningfully guide clinical decisions. We present a user interface that incorporates the IAIA-BL model, which interpretably predicts both mass margin and malignancy for breast lesions. The user interface displays the most relevant aspects of the model’s explanation including the predicted margin value, the AI confidence in the prediction, and the two most highly activated prototypes for each case. In addition, this user interface includes full-field and cropped images of the region of interest, as well as a questionnaire suitable for a reader study. Our preliminary results indicate that the model increases the readers’ confidence and accuracy in their decisions on margin and malignancy.
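The two display elements named above (the two most highly activated prototypes and a confidence value) can be sketched in a few lines. This is an illustrative assumption, not the IAIA-BL implementation: the function names, activation values, and the max-softmax confidence heuristic are all invented here for clarity.

```python
import math

def top_two_prototypes(activations):
    """Return (index, score) pairs for the two strongest prototype activations."""
    ranked = sorted(enumerate(activations), key=lambda p: p[1], reverse=True)
    return ranked[:2]

def softmax_confidence(logits):
    """One common confidence display: the maximum softmax probability."""
    exps = [math.exp(x - max(logits)) for x in logits]
    return max(exps) / sum(exps)

# Hypothetical similarity of one test case to each learned prototype
activations = [0.12, 0.87, 0.05, 0.64]
print(top_two_prototypes(activations))  # -> [(1, 0.87), (3, 0.64)]
```

A real interface would map the returned prototype indices to stored prototype image patches before rendering them next to the case image.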
Award ID(s):
2222336
PAR ID:
10424535
Author(s) / Creator(s):
Editor(s):
Chen, Yan; Mello-Thoms, Claudia R.
Date Published:
Journal Name:
Medical Imaging 2023: Image Perception, Observer Performance, and Technology Assessment
Volume:
12467
Page Range / eLocation ID:
124670P
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mello-Thoms, Claudia R.; Taylor-Phillips, Sian (Ed.)
    There is increasing interest in using deep learning and computer vision to help guide clinical decisions, such as whether to order a biopsy based on a mammogram. Existing networks are typically black boxes, unable to explain how they make their predictions. We present an interpretable deep-learning network that explains its predictions in terms of the BI-RADS features mass shape and mass margin. Our model predicts mass margin and mass shape, then uses the logits from those interpretable models to predict malignancy, also using an interpretable model. The interpretable mass margin model explains its predictions using a prototypical parts model. The interpretable mass shape model predicts segmentations, fits an ellipse, then determines shape based on the goodness of fit and eccentricity of the fitted ellipse. While including mass shape logits in the malignancy prediction model did not improve performance, we present this technique as part of a framework for better clinician-AI communication.
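The shape rule described above can be sketched as a small decision function: if the ellipse fits the segmentation poorly the mass is irregular, otherwise its eccentricity separates round from oval. The thresholds and function names here are illustrative assumptions, not values from the paper.

```python
import math

def classify_shape(semi_major, semi_minor, fit_residual,
                   irregular_fit=0.25, round_ecc=0.6):
    """Classify a mass as IRREGULAR, ROUND, or OVAL (hypothetical thresholds)."""
    if fit_residual > irregular_fit:      # ellipse fits poorly -> irregular
        return "IRREGULAR"
    # Eccentricity of the fitted ellipse: 0 for a circle, -> 1 as it elongates
    ecc = math.sqrt(1.0 - (semi_minor / semi_major) ** 2)
    return "ROUND" if ecc < round_ecc else "OVAL"

print(classify_shape(10.0, 9.5, 0.05))  # nearly circular, good fit -> ROUND
print(classify_shape(10.0, 5.0, 0.05))  # elongated, good fit -> OVAL
```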
  2. In the field of visualization, understanding users’ analytical reasoning is important for evaluating the effectiveness of visualization applications. Several studies have been conducted to capture and analyze user interactions to comprehend this reasoning process. However, few have successfully linked these interactions to users’ reasoning processes. This paper introduces an approach that addresses this limitation by correlating semantic user interactions with analysis decisions using an interactive wire transaction analysis system and a visual state transition matrix, both designed as visual analytics applications. The system enables interactive analysis for evaluating financial fraud in wire transactions. It also allows mapping captured user interactions and analytical decisions back onto the visualization to reveal their decision differences. The visual state transition matrix further aids in understanding users’ analytical flows, revealing their decision-making processes. Classification machine learning algorithms are applied to evaluate the effectiveness of our approach in understanding users’ analytical reasoning process by connecting the captured semantic user interactions to their decisions (i.e., suspicious, not suspicious, and inconclusive) on wire transactions. With these algorithms, the semantic user interactions are classified with an average accuracy of 72%. For classifying individual decisions, the average accuracy is 70%. Notably, the accuracy for classifying ‘inconclusive’ decisions is 83%. Overall, the proposed approach improves the understanding of users’ analytical decisions and provides a robust method for evaluating user interactions in visualization tools.
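The state transition matrix idea above reduces to counting consecutive pairs of semantic interaction states in a session log. A minimal sketch follows; the state names and log are hypothetical, not taken from the paper’s system.

```python
from collections import defaultdict

def transition_matrix(sequence):
    """Map (from_state, to_state) -> count over consecutive interaction pairs."""
    counts = defaultdict(int)
    for a, b in zip(sequence, sequence[1:]):
        counts[(a, b)] += 1
    return dict(counts)

# Hypothetical interaction log from one analyst's session
log = ["filter", "inspect", "filter", "inspect", "flag_suspicious"]
print(transition_matrix(log))
```

In a full pipeline, each user’s matrix (flattened to a vector) could serve as the feature representation fed to the classification algorithms that predict the analyst’s decision.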
  3. Cognitive radio networks (CRNs), which offer a novel network architecture for utilising spectrum, have attracted significant attention in recent years. CRN users use spectrum opportunistically: they sense a channel and, if it is free, start transmitting in that channel. In cooperative spectrum sensing, a secondary user (SU) decides about the presence of the primary user (PU) based on information from other SUs. Malicious SUs (MSUs) send false sensing information to other SUs so that they make wrong decisions about the spectrum status. As a result, an SU may transmit during the presence of the PU or may be starved of spectrum. In this paper, we propose a reputation-based mechanism which can minimise the effects of MSUs on decision making in cooperative spectrum sensing. Some of the SUs are selected as distributed fusion centres (DFCs), which are responsible for making decisions about the presence of the PU and informing the reporting SUs. A DFC uses weighted majority voting among the reporting SUs, where weights are normalised reputations. The DFC updates the reputations of SUs based on the confidence of an election: if the majority wins by a significant margin, the confidence of the election is high, and SUs that belong to the majority gain high reputations. We conduct extensive simulations to validate our proposed model.
  4. Accurately measuring and understanding affective loads, such as cognitive and emotional loads, is crucial in human–robot interaction (HRI) research. Although established assessment tools exist in psychology and cognitive neuroscience for gauging working memory capability, few tools specifically measure affective loads. To address this gap, we propose a practical stimulus tool for teleoperated human–robot teams. The tool comprises a customizable graphical user interface and subjective questionnaires for measuring affective loads. We validated through extensive user experiments that this tool can invoke different levels of affective loads.
  5. Decaying infrastructure maintenance cost allocation depends heavily on accurate and safe inspection in the field. New tools for conducting inspections can assist in prioritizing investments in maintenance and repairs. The industrial revolution termed “Industry 4.0” is based on the intelligence of machines working with humans in a collaborative workspace. In contrast, infrastructure management has relied on humans for day-to-day decisions. New emerging technologies can assist during infrastructure inspections by quantifying structural condition with more objective data. However, today’s owners agree in trusting the inspector’s decision in the field over data collected with sensors. If data collected in the field were accessible during inspections, inspectors’ decisions could be improved with sensors. New research opportunities at the human–infrastructure interface would allow researchers to improve inspectors’ awareness of their surrounding environment during inspections. This article studies the role of Augmented Reality (AR) technology as a tool to increase human awareness of infrastructure during inspection work. The domains of interest of this research include both infrastructure inspections (with emphasis on collecting structural data to inform management decisions) and emergency management (with focus on collecting environmental data to inform human actions). This article describes the use of a head-mounted device to access real-time data and information during field inspection. The authors leverage low-cost smart sensors and QR code scanners integrated with Augmented Reality applications to augment the human interface with the physical environment. This article presents a novel interface architecture for developing Augmented Reality–enabled inspection to assist the inspector’s workflow in conducting infrastructure inspection work, with two new applications, and summarizes the results from various experiments. The main contributions of this work to the computer-aided inspection community are enabling inspectors to visualize data files from a database and to access real-time data within an Augmented Reality environment.