Title: Think about the stakeholders first! Toward an algorithmic transparency playbook for regulatory compliance
Abstract

Increasingly, laws are being proposed and passed by governments around the world to regulate artificial intelligence (AI) systems deployed in the public and private sectors. Many of these regulations address the transparency of AI systems and related citizen-aware issues, such as ensuring that individuals have the right to an explanation of how an AI system makes a decision that impacts them. Yet almost all AI governance documents to date have a significant drawback: they focus on what to do (or what not to do) with respect to making AI systems transparent, but leave the brunt of the work to technologists to figure out how to build transparent systems. We fill this gap by proposing a stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems. We also describe a real-world case study that illustrates how this approach can be used in practice.
Award ID(s):
1928614 2129076 1922658 1916505 1934464 1916647
NSF-PAR ID:
10432161
Author(s) / Creator(s):
Date Published:
Journal Name:
Data & Policy
Volume:
5
ISSN:
2632-3249
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Research in artificial intelligence, as well as in economics and other related fields, generally proceeds from the premise that each agent has a well-defined identity, well-defined preferences over outcomes, and well-defined beliefs about the world. However, as we design AI systems, we in fact need to specify where the boundaries between one agent and another in the system lie, what objective functions these agents aim to maximize, and to some extent even what belief formation processes they use. The premise of this paper is that as AI is being broadly deployed in the world, we need well-founded theories of, and methodologies and algorithms for, how to design preferences, identities, and beliefs. This paper lays out an approach to address these problems from a rigorous foundation in decision theory, game theory, social choice theory, and the algorithmic and computational aspects of these fields. 
  2. Improving the performance and explanations of ML algorithms is a priority for their adoption by humans in the real world. In critical domains such as healthcare, such technology has significant potential to reduce the burden on humans by providing quality assistance at scale and considerably reducing the need for manual assessment. In today's data-driven world, artificial intelligence (AI) systems still experience issues with bias, explainability, and human-like reasoning and interpretability. Causal AI is a technique that can reason and make human-like choices, making it possible to go beyond narrow machine-learning-based techniques and to integrate it into human decision-making. It also offers intrinsic explainability, adaptability to new domains, and bias-free predictions, and it works with datasets of all sizes. In this lecture-style tutorial, we detail how a richer representation of causality in AI systems, using a knowledge graph (KG) based approach, is needed for intervention and counterfactual reasoning (Figure 1); how to achieve model-based and domain explainability; and how causal representations help in web and healthcare applications.
  3. In the past decade, a number of sophisticated AI-powered systems and tools have been developed and released to the scientific community and the public. These technical developments have occurred against a backdrop of political and social upheaval that is both magnifying and magnified by public health and macroeconomic crises. These technical and socio-political changes offer multiple lenses to contextualize (or distort) scientific reflexivity. Further, for computational social scientists who study computer-mediated human behavior, they have implications for what we study and how we study it. How should the ICWSM community engage with this changing world? Which disruptions should we embrace, and which ones should we resist? Whom do we ally with, and for what purpose? In this workshop, co-located with ICWSM, we invited experience-based perspectives on these questions with the intent of drafting a collective research agenda for the computational social science community. We did so by facilitating collaborative position papers and discussing imminent challenges we face in the context of, for example, proprietary large language models, an increasingly unwieldy peer review process, and growing issues in data collection and access. This document presents a summary of the contributions and discussions in the workshop.
  4. Computer science faculty have a responsibility to teach students to recognize both the larger ethical issues and particular responsibilities that are part and parcel of their work as technologists. This is, however, a kind of teaching for which most of us have not been trained, and one which faculty and students approach with some trepidation. In this article we explore the use of science fiction as an effective tool to enable those teaching AI to engage students and practitioners about the scope and implications of current and future work in computer science. 
  5. Abstract

    Artificial intelligence (AI) methods have seen increasingly widespread use in everything from consumer products and driverless cars to fraud detection and weather forecasting. The use of AI has transformed many of these application domains. There are ongoing efforts to leverage AI for disaster risk analysis. This article takes a critical look at the use of AI for disaster risk analysis. What is the potential? How is the use of AI in this field different from its use in non-disaster fields? What challenges need to be overcome for this potential to be realized? And what are the potential pitfalls of an AI-based approach to disaster risk analysis that we as a society must be cautious of?