Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision-making must be understandable to a wide range of stakeholders. Methods to explain artificial intelligence (AI) have been proposed to answer this challenge, but a lack of theory impedes the development of systematic abstractions, which are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (XAI) by integrating machine learning and human learning. Bayesian Teaching formalizes explanation as a communication act of an explainer to shift the beliefs of an explainee. This formalization decomposes a wide range of XAI methods into four components: (a) the target inference, (b) the explanation, (c) the explainee model, and (d) the explainer model. The abstraction afforded by Bayesian Teaching to decompose XAI methods elucidates the invariances among them. The decomposition of XAI systems enables modular validation, as each of the first three components listed can be tested semi-independently. This decomposition also promotes generalization through recombination of components from different XAI systems, which facilitates the generation of novel variants. These new variants need not be evaluated one by one provided that each component has been validated, leading to an exponential decrease in development time. Finally, by making the goal of explanation explicit, Bayesian Teaching helps developers to assess how suitable an XAI system is for its intended real-world use case. Thus, Bayesian Teaching provides a theoretical framework that encourages systematic, scientific investigation of XAI.
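The four-component decomposition described in this abstract can be sketched in code. The sketch below is a minimal illustration, not the authors' implementation: the function names, the toy explainee model, and the feature-overlap scoring are all assumptions introduced for clarity.

```python
from typing import Any, Callable, List

def select_explanation(
    explainee_model: Callable[[Any, Any], float],  # (c) scores how strongly an explanation supports the target
    target: Any,                                   # (a) the target inference to convey
    candidates: List[Any],                         # (b) candidate explanations
) -> Any:
    # (d) The explainer: choose the explanation under which the explainee's
    # model assigns the highest score to the target inference.
    return max(candidates, key=lambda e: explainee_model(target, e))

# Toy explainee model (an assumption for illustration): the target inference
# becomes more plausible as the explanation shares more features with it.
def toy_explainee(target, explanation):
    return len(set(target) & set(explanation)) / max(len(set(target)), 1)

best = select_explanation(toy_explainee,
                          target={"shape", "color"},
                          candidates=[{"shape"}, {"shape", "color"}, {"size"}])
```

Because each component enters as a separate argument, any one of them can be swapped or validated independently, which is the modularity the abstract describes.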
Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Using two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss conceptual implications of the framework, share practical considerations in its operationalization, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
- Award ID(s): 1928586
- PAR ID: 10434402
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 7
- Issue: CSCW1
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 32
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract
Non-consensual intimate media (NCIM) involves sharing intimate content without the depicted person's consent, including 'revenge porn' and sexually explicit deepfakes. While NCIM has received attention in legal, psychological, and communication fields over the past decade, it is not sufficiently addressed in computing scholarship. This paper addresses this gap by linking NCIM harms to the specific technological components that facilitate them. We introduce the sociotechnical stack, a conceptual framework designed to map the technical stack to its corresponding social impacts. The sociotechnical stack allows us to analyze sociotechnical problems like NCIM and points toward opportunities for computing research. We propose a research roadmap for the computing and social computing communities to deter NCIM perpetration and support victim-survivors through building and rebuilding technologies.
-
Artificial intelligence (AI) is an emerging technology with great potential to reduce the energy consumption, environmental burdens, and operational risks of chemical production. However, large-scale applications of AI are still limited. One barrier is the lack of quantitative understanding of the potential benefits and risks of different AI applications. This study reviewed the relevant AI literature and categorized case studies by application type, impact category, and application mode. Most studies assessed the energy, economic, and safety implications of AI applications, while few evaluated the environmental impacts of AI, given the large data gaps and the difficulty of choosing appropriate assessment methods. Based on the reviewed case studies in the chemical industry, we propose a conceptual framework that combines approaches from industrial ecology, economics, and engineering to guide the selection of performance indicators and evaluation methods for a holistic assessment of AI's impacts. This framework could be a valuable tool to support AI-related decision-making in both fundamental research and the practical production of chemicals. Although this study focuses on the chemical industry, the insights of the literature review and the proposed framework can be applied to AI applications in other industries and in broader industrial ecology fields. Finally, this study highlights future research directions for addressing the data challenges in assessing AI's impacts and for developing AI-enhanced tools to support the sustainable development of the chemical industry.
-
The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and on fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and for consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experience of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, and after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization maps design goals for different XAI user groups to their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized, ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
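The mapping this survey describes, from XAI user groups and design goals to evaluation methods, can be sketched as a simple lookup structure. The groups, goals, and methods below are assumptions chosen for illustration, not the article's actual categorization tables.

```python
# Hypothetical illustration of a goal-to-evaluation mapping; all entries
# are example placeholders, not the survey's findings.
xai_evaluation_map = {
    "ML experts": {
        "model debugging": ["ablation studies", "simulated-user tests"],
    },
    "domain experts": {
        "decision support": ["task-performance studies", "trust questionnaires"],
    },
    "lay users": {
        "appropriate trust": ["satisfaction surveys", "mental-model probes"],
    },
}

def evaluation_methods(user_group: str, design_goal: str) -> list:
    # Look up the evaluation methods paired with a user group and design goal;
    # return an empty list when no pairing exists.
    return xai_evaluation_map.get(user_group, {}).get(design_goal, [])
```

Keeping the pairing explicit in one structure is what lets a multidisciplinary team check, goal by goal, that every design decision has a matching evaluation method.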
-
AI-driven tools are increasingly deployed to support low-skilled community health workers (CHWs) in hard-to-reach communities in the Global South. This paper examines how CHWs in rural India engage with and perceive AI explanations and how we might design explainable AI (XAI) interfaces that are more understandable to them. We conducted semi-structured interviews with CHWs who interacted with a design probe to predict neonatal jaundice in which AI recommendations are accompanied by explanations. We (1) identify how CHWs interpreted AI predictions and the associated explanations, (2) unpack the benefits and pitfalls they perceived of the explanations, and (3) detail how different design elements of the explanations impacted their AI understanding. Our findings demonstrate that while CHWs struggled to understand the AI explanations, they nevertheless expressed a strong preference for the explanations to be integrated into AI-driven tools and perceived several benefits of the explanations, such as helping CHWs learn new skills and improving patient trust in AI tools and in CHWs. We conclude by discussing what elements of AI need to be made explainable to novice AI users like CHWs and outline concrete design recommendations to improve the utility of XAI for novice AI users in non-Western contexts.