Title: Nutritional Labels for Data and Models
An essential ingredient of successful machine-assisted decision-making, particularly in high-stakes decisions, is interpretability: allowing humans to understand, trust, and, if necessary, contest the computational process and its outcomes. These decision-making processes are typically complex: carried out in multiple steps, employing models with many hidden assumptions, and relying on datasets that are often used outside of the original context for which they were intended. In response, humans need to be able to determine the “fitness for use” of a given model or dataset, and to assess the methodology that was used to produce it. To address this need, we propose to develop interpretability and transparency tools based on the concept of a nutritional label, drawing an analogy to the food industry, where simple, standard labels convey information about the ingredients and production processes. Nutritional labels are derived automatically or semi-automatically as part of the complex process that gave rise to the data or model they describe, embodying the paradigm of interpretability-by-design. In this paper we further motivate nutritional labels, describe our instantiation of this paradigm for algorithmic rankers, and give a vision for developing nutritional labels that are appropriate for different contexts and stakeholders.
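The abstract leaves the contents of a label unspecified; purely as an illustration of the idea, and assuming a ranker that scores items by a weighted sum of attributes, a machine-derived label might bundle the scoring recipe, its most influential "ingredients", and a stability estimate. The field names below are hypothetical, not the paper's schema.

```python
# Hypothetical sketch of a nutritional label for a score-based ranker
# (illustrative only; field names are assumptions, not the paper's schema).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RankingLabel:
    recipe: Dict[str, float]   # attribute -> weight used by the ranker
    ingredients: List[str]     # attributes with the largest influence on the ranking
    stability: float           # how robust the top of the ranking is to small weight changes
    provenance: str            # where the underlying dataset came from

def make_label(weights: Dict[str, float], provenance: str, stability: float) -> RankingLabel:
    """Derive a label automatically from the ranker's own configuration."""
    top = sorted(weights, key=lambda a: abs(weights[a]), reverse=True)
    return RankingLabel(recipe=dict(weights), ingredients=top[:3],
                        stability=stability, provenance=provenance)

label = make_label({"gpa": 0.5, "test_score": 0.3, "essay": 0.2},
                   provenance="admissions office, 2018 cycle", stability=0.85)
print(label.ingredients)   # ['gpa', 'test_score', 'essay']
```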
Award ID(s):
1916647 1926250
PAR ID:
10176629
Author(s) / Creator(s):
;
Date Published:
Journal Name:
A Quarterly bulletin of the Computer Society of the IEEE Technical Committee on Data Engineering
Volume:
42
Issue:
3
ISSN:
1053-1238
Page Range / eLocation ID:
13-23
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions such as humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors such as decision stakes and their interaction experiences.
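The abstract does not give the model's parameterization; the following is a minimal sketch of the general idea, assuming two discrete trust states and treating each trial's rely/not-rely decision as the observation. The transition and emission probabilities are illustrative assumptions, not estimates from the paper's data.

```python
import numpy as np

# Latent trust states: 0 = low trust, 1 = high trust (assumed discretization).
T = np.array([[0.8, 0.2],    # P(next trust | current trust): trust drifts slowly
              [0.1, 0.9]])
E = np.array([[0.7, 0.3],    # P(observation | trust); column 1 = "relied on AI"
              [0.2, 0.8]])   # low trust -> rarely relies, high trust -> usually relies
pi = np.array([0.5, 0.5])    # initial trust distribution

def forward(reliance_seq):
    """Forward algorithm: likelihood of an observed sequence of reliance decisions."""
    alpha = pi * E[:, reliance_seq[0]]
    for obs in reliance_seq[1:]:
        alpha = (alpha @ T) * E[:, obs]
    return alpha.sum()

# Example: a decision maker who relies on the AI more and more over time.
print(forward([0, 0, 1, 1, 1]))
```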
  2. Abstract

    AI assistance is readily available to humans in a variety of decision-making applications. In order to fully understand the efficacy of such joint decision-making, it is important to first understand the human’s reliance on AI. However, there is a disconnect between how joint decision-making is studied and how it is practiced in the real world. More often than not, researchers ask humans to provide independent decisions before they are shown AI assistance. This is done to make explicit the influence of AI assistance on the human’s decision. We develop a cognitive model that allows us to infer the latent reliance strategy of humans on AI assistance without asking the human to make an independent decision. We validate the model’s predictions through two behavioral experiments. The first experiment follows a concurrent paradigm where humans are shown AI assistance alongside the decision problem. The second experiment follows a sequential paradigm where humans provide an independent judgment on a decision problem before AI assistance is made available. The model’s predicted reliance strategies closely track the strategies employed by humans in the two experimental paradigms. Our model provides a principled way to infer reliance on AI assistance and may be used to expand the scope of investigation on human-AI collaboration.
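As a rough sketch of how reliance can be inferred without an independent judgment (a deliberate simplification, not the paper's cognitive model), one can treat each final answer as either a copy of the AI's recommendation or an independent response that agrees with the AI at some baseline rate, and solve for the reliance probability from the observed agreement rate. The baseline agreement rate below is an assumed input.

```python
import numpy as np

# Toy sketch: on each trial the human either adopts the AI's answer (prob r)
# or answers on their own, agreeing with the AI at a known baseline rate.
# We infer r from how often the final answer matches the AI's recommendation,
# without ever observing an independent human judgment.
def estimate_reliance(agreements, p_agree_independent=0.6):
    """agreements: sequence of 0/1, did the final answer match the AI's recommendation."""
    obs = np.mean(agreements)
    # P(agree) = r * 1 + (1 - r) * p_agree_independent  =>  solve for r
    r = (obs - p_agree_independent) / (1.0 - p_agree_independent)
    return float(np.clip(r, 0.0, 1.0))

print(estimate_reliance([1, 1, 0, 1, 1, 1, 0, 1]))  # ~0.375 under these assumptions
```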

     
  3. Abstract

    Understanding why animals (including humans) choose one thing over another is one of the key questions underlying the fields of behavioural ecology, behavioural economics and psychology. Most traditional studies of food choice in animals focus on simple, single‐attribute decision tasks. However, animals in the wild are often faced with multi‐attribute choice tasks where options in the choice set vary across multiple dimensions. Multi‐attribute decision‐making is particularly relevant for flower‐visiting insects faced with deciding between flowers that may differ in reward attributes such as sugar concentration, nectar volume and pollen composition as well as non‐rewarding attributes such as colour, symmetry and odour. How do flower‐visiting insects deal with complex multi‐attribute decision tasks?

    Here we review and synthesise research on the decision strategies used by flower‐visiting insects when making multi‐attribute decisions. In particular, we review how different types of foraging frameworks (classic optimal foraging theory, nutritional ecology, heuristics) conceptualise multi‐attribute choice and we discuss how phenomena such as innate preferences, flower constancy and context dependence influence our understanding of flower choice.

    We find that multi‐attribute decision‐making is a complex process that can be influenced by innate preferences, flower constancy, the composition of the choice set and economic reward value. We argue that to understand and predict flower choice in flower‐visiting insects, we need to move beyond simplified choice sets towards a view of multi‐attribute choice which integrates the role of non‐rewarding attributes and which includes flower constancy, innate preferences and context dependence. We further caution that behavioural experiments need to consider the possibility of context dependence in the design and interpretation of preference experiments.

    We conclude with a discussion of outstanding questions for future research. We also present a conceptual framework that incorporates the multiple dimensions of choice behaviour.

     
  4. Human-AI collaboration is an increasingly commonplace part of decision-making in real world applications. However, how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial in making the decision to solicit advice. We illustrate that the metacognitive bandit makes decisions similar to those of humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior biases towards AI to produce this behavior.
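A toy sketch of the bandit intuition (the update rule, priors, and difficulty penalty below are assumptions for illustration, not the paper's model): keep value estimates for "answer myself" and "ask the AI", add an exploration bonus, and solicit advice when the AI arm looks more valuable on the current trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Running accuracy estimates for the two "arms": unaided judgment vs. AI advice.
# Priors and the difficulty penalty are illustrative assumptions.
counts = {"self": 1, "ai": 1}
successes = {"self": 0.7, "ai": 0.7}   # optimistic priors

def choose(difficulty, t):
    """Pick an arm using a UCB-style bonus; hard trials penalize unaided judgment."""
    value = {}
    for arm in ("self", "ai"):
        mean = successes[arm] / counts[arm]
        bonus = np.sqrt(2 * np.log(t + 1) / counts[arm])
        penalty = difficulty if arm == "self" else 0.0
        value[arm] = mean + bonus - penalty
    return max(value, key=value.get)

def update(arm, correct):
    counts[arm] += 1
    successes[arm] += int(correct)

for t in range(20):
    difficulty = rng.uniform(0, 0.5)
    arm = choose(difficulty, t)
    correct = rng.random() < (0.85 if arm == "ai" else 0.75 - difficulty)
    update(arm, correct)

print(counts)  # how often advice was (and was not) solicited in this toy run
```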
  5. Algorithmic case-based decision support provides examples to help humans make sense of predicted labels and aid them in decision-making tasks. Despite the promising performance of supervised learning, representations learned by supervised models may not align well with human intuitions: what models consider similar examples can be perceived as distinct by humans. As a result, they have limited effectiveness in case-based decision support. In this work, we incorporate ideas from metric learning with supervised learning to examine the importance of alignment for effective decision support. In addition to instance-level labels, we use human-provided triplet judgments to learn human-compatible decision-focused representations. Using both synthetic data and human subject experiments in multiple classification tasks, we demonstrate that such representations are better aligned with human perception than representations solely optimized for classification. Human-compatible representations identify nearest neighbors that are perceived as more similar by humans and allow humans to make more accurate predictions, leading to substantial improvements in human decision accuracy (17.8% in butterfly vs. moth classification and 13.2% in pneumonia classification).
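A minimal sketch of combining instance-level supervision with human triplet judgments, assuming a PyTorch setup with synthetic data; the architecture and loss weight are illustrative, not the paper's configuration. In practice, the triplets would come from human similarity judgments over real instances rather than random tensors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch (not the paper's exact setup): learn an embedding that both
# classifies instances and respects human triplet judgments of similarity.
torch.manual_seed(0)
in_dim, emb_dim, num_classes = 16, 8, 2

encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, emb_dim))
classifier = nn.Linear(emb_dim, num_classes)
triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

# Synthetic batch: labeled instances, plus human-provided triplets
# (anchor, positive judged more similar, negative judged less similar).
x, y = torch.randn(32, in_dim), torch.randint(0, num_classes, (32,))
anchor, pos, neg = torch.randn(8, in_dim), torch.randn(8, in_dim), torch.randn(8, in_dim)

for _ in range(100):
    optimizer.zero_grad()
    ce = F.cross_entropy(classifier(encoder(x)), y)                 # instance-level labels
    tr = triplet_loss(encoder(anchor), encoder(pos), encoder(neg))  # human similarity judgments
    loss = ce + 0.5 * tr   # the trade-off weight is an assumption
    loss.backward()
    optimizer.step()

print(float(loss))
```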