Search for: All records

Award ID contains: 2026809


  1. The introduction of deep learning and convolutional neural networks (CNNs) to image recognition has led to state-of-the-art classification accuracy. However, CNNs exacerbate the problem of algorithm explainability because of deep learning's black-box nature. Numerous explainable AI (XAI) algorithms have been developed to give developers insight into the inner workings of deep models. We aim to make XAI explanations more user-centric by modifying existing XAI algorithms based on cognitive theory. The goal of this research is to produce intuitive XAI explanations that more closely resemble the explanations experts give in the domain of bird watching. Building on an existing base XAI algorithm, we conducted two user studies with expert bird watchers and found that our novel averaged and contrasting XAI algorithms are significantly preferred over the base algorithm for bird identification. (A sketch of one plausible reading of these two variants appears after this list.)
  2. We argue that the dominant approach to explainable AI for image classification, annotating images with heatmaps, provides little value for users unfamiliar with deep learning. Instead, explainable AI for images should produce the kind of output experts produce when communicating with one another, with apprentices, and with novices. We provide an expanded set of goals for explainable AI systems and propose a Turing Test for explainable AI.
  3. With the spread of COVID-19, significantly more patients have required medical diagnosis to determine whether they carry the virus. COVID-19 can lead to pneumonia in the lungs, which is visible in X-ray and CT scans of the patient's chest. The abundance of available X-ray and CT image data can be used to develop a high-performing computer vision model that identifies and classifies instances of pneumonia in medical scans. Predictions from such deep learning models can increase diagnostic confidence because the models pick up minute features of COVID-19 pneumonia that are often unnoticeable to the human eye. Furthermore, rather than teaching clinicians the mathematics behind deep learning and heat maps, we introduce novel explainable artificial intelligence (XAI) methods whose goal is to annotate instances of pneumonia in medical scans exactly as radiologists do when informing other radiologists, clinicians, and interns about patterns and findings. This project explores methods to train and optimize state-of-the-art deep learning models on COVID-19 pneumonia medical scans and to apply explainability algorithms that generate annotated explanations of model predictions useful to clinicians and radiologists analyzing these images. (A sketch of such a classify-then-explain pipeline, under stated assumptions, follows this list.)
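Item 1 above does not spell out how the averaged and contrasting variants are computed. The Python sketch below shows one plausible reading, assuming the base XAI algorithm yields a per-pixel saliency map (as Grad-CAM or LIME would): averaging maps over several exemplars of the predicted class, and subtracting the map of a confusable class to answer "why this bird rather than that one". The function `base_heatmap`, its placeholder body, and both compositions are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def base_heatmap(image: np.ndarray, class_idx: int) -> np.ndarray:
    """Hypothetical base XAI algorithm returning a saliency map in [0, 1].

    A real pipeline would wrap Grad-CAM, LIME, or similar; a seeded random
    map stands in here so the sketch runs end to end."""
    seed = (int(image.sum() * 1e6) + class_idx) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.random(image.shape[:2])

def averaged_explanation(images: list, class_idx: int) -> np.ndarray:
    """Average base heatmaps over several exemplars of the predicted class.

    Assumes the exemplars are roughly spatially aligned, so averaging
    smooths instance-specific noise rather than blurring bird parts."""
    maps = np.stack([base_heatmap(img, class_idx) for img in images])
    return maps.mean(axis=0)

def contrasting_explanation(image: np.ndarray, predicted_idx: int,
                            contrast_idx: int) -> np.ndarray:
    """Keep only regions supporting the prediction more than a similar
    class ("why species A rather than species B")."""
    diff = base_heatmap(image, predicted_idx) - base_heatmap(image, contrast_idx)
    return np.clip(diff, 0.0, None)  # drop evidence favoring the contrast class

if __name__ == "__main__":
    birds = [np.random.rand(224, 224, 3) for _ in range(5)]  # stand-in images
    print(averaged_explanation(birds, class_idx=7).shape)    # (224, 224)
    print(contrasting_explanation(birds[0], 7, 12).shape)    # (224, 224)
```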
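Item 3 likewise leaves the concrete pipeline unspecified. Below is a minimal PyTorch sketch of one classify-then-explain setup under stated assumptions: a two-class ResNet-18 over chest scans and a plain Grad-CAM heatmap for the predicted class. The model choice, the class setup, and Grad-CAM itself are assumptions for illustration; the project's novel radiologist-style annotation methods are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Two-class classifier: {normal, COVID-19 pneumonia}. Pretrained ImageNet
# weights would normally be loaded before fine-tuning; weights=None keeps
# the sketch self-contained offline.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

# Grad-CAM needs the last conv block's activations and their gradients;
# capture both with hooks on layer4.
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0].detach()))

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    """Return a [0, 1] heatmap at input resolution for the predicted class."""
    logits = model(x)
    cls = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, cls].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage with a dummy tensor; a real scan would be normalized and resized.
heatmap = grad_cam(torch.rand(1, 3, 224, 224))
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```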