Title: Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Using two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss the conceptual implications of the framework, share practical considerations for operationalizing it, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
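The framework itself is qualitative, but its core move, pairing what a system technically affords against what stakeholders socially need, can be illustrated with a small data structure. The sketch below is a hypothetical illustration, not an artifact from the paper; the class, field, and function names and the example values are all assumptions.

```python
# Hypothetical sketch: recording sociotechnical-gap observations as data.
# GapObservation, chart_gap, and the example values are illustrative
# assumptions, not the paper's framework or terminology.
from dataclasses import dataclass


@dataclass
class GapObservation:
    """One charted point: a guideline, what the system affords, what users need."""
    guideline: str             # e.g., an item from a published AI guideline
    technical_affordance: str  # what the XAI system currently provides
    social_need: str           # what stakeholders actually require
    gap_noted: bool = True     # whether a divide between the two was observed


def chart_gap(observations: list[GapObservation]) -> list[GapObservation]:
    """Keep only the observations where affordances fall short of needs."""
    return [obs for obs in observations if obs.gap_noted]


# Usage: a single observation for a hypothetical loan-decision explainer.
obs = GapObservation(
    guideline="Make clear why the system did what it did",
    technical_affordance="Feature-importance scores over model inputs",
    social_need="A justification in domain terms a loan officer can act on",
)
print(len(chart_gap([obs])))  # -> 1
```

Under this reading, charting amounts to populating such records from case-study data and examining where the gap flags cluster across guidelines.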
Award ID(s): 1928586
NSF-PAR ID: 10434402
Journal Name: Proceedings of the ACM on Human-Computer Interaction
Volume: 7
Issue: CSCW1
ISSN: 2573-0142
Page Range / eLocation ID: 1 to 32
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision‐making must be understandable to a wide range of stakeholders. Methods to explain artificial intelligence (AI) have been proposed to answer this challenge, but a lack of theory impedes the development of systematic abstractions, which are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (XAI) by integrating machine learning and human learning. Bayesian Teaching formalizes explanation as a communication act of an explainer to shift the beliefs of an explainee. This formalization decomposes a wide range of XAI methods into four components: (a) the target inference, (b) the explanation, (c) the explainee model, and (d) the explainer model. The abstraction afforded by Bayesian Teaching to decompose XAI methods elucidates the invariances among them. The decomposition of XAI systems enables modular validation, as each of the first three components listed can be tested semi‐independently. This decomposition also promotes generalization through recombination of components from different XAI systems, which facilitates the generation of novel variants. These new variants need not be evaluated one by one provided that each component has been validated, leading to an exponential decrease in development time. Finally, by making the goal of explanation explicit, Bayesian Teaching helps developers to assess how suitable an XAI system is for its intended real‐world use case. Thus, Bayesian Teaching provides a theoretical framework that encourages systematic, scientific investigation of XAI.
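To make the four-component decomposition concrete, here is a minimal sketch of how an XAI system could be represented under Bayesian Teaching. This is an illustrative assumption, not code from the paper; the type names, signatures, and the simple argmax selection rule are all hypothetical.

```python
# Hypothetical sketch of the four-component decomposition of an XAI system.
# All names and signatures are illustrative assumptions, not an official API.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class XAISystem:
    target_inference: Callable[[Any], Any]   # (a) what the model infers from input
    explanations: List[Any]                  # (b) candidate artifacts to show
    explainee_model: Callable[[Any], float]  # (c) predicted belief shift per explanation
    explainer_model: Callable[["XAISystem"], Any]  # (d) how an explanation is chosen


def greedy_explainer(system: XAISystem) -> Any:
    """One possible explainer model: pick the explanation predicted to shift
    the explainee's beliefs the most (explanation as a communication act)."""
    return max(system.explanations, key=system.explainee_model)


# Usage: a toy system where longer explanations are assumed more convincing.
toy = XAISystem(
    target_inference=lambda x: x > 0,
    explanations=["saliency map", "nearest training example"],
    explainee_model=lambda e: float(len(e)),
    explainer_model=greedy_explainer,
)
print(toy.explainer_model(toy))  # -> "nearest training example"
```

Testing the explainee model against measured human belief shifts, independently of the explainer, is one way to read the abstract's point about modular validation.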

     
  2. Abstract

As professional science becomes increasingly computational, researchers and educators are advocating for the integration of computational thinking (CT) into science education. Researchers and policymakers have argued that CT learning opportunities should begin in elementary school and span the K‐12 grades. While researchers and policymakers have specified how students should engage in CT for science learning, the success of CT integration ultimately depends on how elementary teachers implement CT in their science lessons. This new demand for teachers who can integrate CT has created a need for effective conceptual tools that teacher educators and professional development designers can use to develop elementary teachers' understanding and operationalization of CT for their classrooms. However, existing frameworks for CT integration have limitations: they overlook the elementary grades, conceptualize CT in isolation rather than integrated into science, and/or have not been tested in teacher education contexts. After reviewing existing CT integration frameworks and detailing an important gap in the science teacher education literature, we present our framework for the integration of CT into elementary science education, with a special focus on how to use this framework with teachers. Situated within our design‐based research study, we (a) explain the decision‐making process of designing the framework; (b) describe the pedagogical affordances and challenges it presented as we implemented it with a cohort of pre‐ and in‐service teachers; (c) provide suggestions for its use in teacher education contexts; and (d) theorize possible pathways to continue its refinement.

     
Local explanation methods provide heatmaps over images to explain how Convolutional Neural Networks (CNNs) derive their outputs. Due to its visual straightforwardness, local explanation has been one of the most popular explainable AI (XAI) methods for diagnosing CNNs. Through our formative study (S1), however, we captured ML engineers' ambivalent perspective on local explanations: a valuable and indispensable tool in building CNNs, yet a process that exhausts them due to the heuristic nature of detecting vulnerabilities. Moreover, steering CNNs based on the vulnerabilities learned from diagnosis seemed highly challenging. To mitigate this gap, we designed DeepFuse, the first interactive design that realizes a direct feedback loop between a user and CNNs for diagnosing and revising a CNN's vulnerabilities using local explanations. DeepFuse helps CNN engineers systematically search for unreasonable local explanations and annotate new boundaries for those identified as unreasonable in a labor-efficient manner. Next, it steers the model based on the given annotations so that the model does not repeat similar mistakes. We conducted a two-day study (S2) with 12 experienced CNN engineers. Using DeepFuse, participants built a more accurate and reasonable model than the current state of the art. Participants also found that the way DeepFuse guides case-based reasoning could practically improve their current practice. We provide implications for design that explain how future HCI-driven design can move practice forward in making XAI-driven insights more actionable.
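As a rough illustration of the diagnose-and-steer loop the abstract describes, the sketch below shows one pass of flagging unreasonable local explanations, collecting corrected annotations, and revising the model. It is a hypothetical outline under assumed callables, not DeepFuse's actual implementation.

```python
# Hypothetical outline of a user-in-the-loop diagnose-and-steer pass.
# The callables are assumptions standing in for DeepFuse's actual components.
def diagnose_and_steer(model, images, explain, is_reasonable, annotate, steer):
    """explain(model, img)   -> heatmap (a local explanation, e.g., saliency)
    is_reasonable(heatmap)   -> bool, the engineer's judgment of the heatmap
    annotate(img, heatmap)   -> corrected boundary the explanation should respect
    steer(model, pairs)      -> model revised on (image, annotation) pairs
    """
    flagged = []
    for img in images:
        heatmap = explain(model, img)
        if not is_reasonable(heatmap):
            # The engineer redraws the boundary for the unreasonable explanation.
            flagged.append((img, annotate(img, heatmap)))
    # Revise the model on annotated cases so similar mistakes recur less often.
    return steer(model, flagged) if flagged else model
```

In practice, steer might fine-tune with a loss that penalizes attention outside the annotated boundary; that specific choice is an assumption here, not a claim about DeepFuse.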

     
Practical ingenuity is demonstrated in engineering design in many ways. Students and practitioners alike create many iterations of prototypes when solving problems and design challenges. While focus is typically on the end product and/or the process employed along the way, this study combines these interests to better understand both product and process through the roles of initial prototyping: the creation of alpha prototypes, conceptual mock-ups, and other rapid prototypes. We explore the purposes and affordances of these low-fidelity prototypes in engineering design activity, synthesizing different perspectives from the literature into an integrated framework for characterizing prototypes developed as part of ideation in designing, and drawing on historic and student examples and case studies. Studying prototyping (activity) and prototypes (artifacts) is a way to study design thinking and how students and practitioners learn and apply a problem-solving process to their work. Prototyping can make readily evident and explicit (through the act of creating and the creations themselves) some of the engineering designer's thinking about and insight into the design problem. Initial, low-fidelity prototypes are not always elaborate depictions containing all the fine details of the design; in fact, features in a prototype do not always appear in the final design. The underpinning of this work is that prototyping, as a process, is an act of externalizing design thinking, embodying it through physical objects. While several prescriptive frameworks have been developed to describe what prototypes prototype and the roles prototypes play, the role of low-fidelity prototypes specifically lacks sufficient attention. We instead present prototyping as a holistic mindset, a means of approaching problem solving in a more accessible manner. Such a mindset can be applied from initial problem understanding through functional decomposition, using quick builds and learning loops to communicate and learn: with oneself, with an engineering design team, and with potential stakeholders outside the team.