Abstract: AI assistance is readily available to humans in a variety of decision-making applications. To fully understand the efficacy of such joint decision-making, it is important to first understand the human's reliance on AI. However, there is a disconnect between how joint decision-making is studied and how it is practiced in the real world. More often than not, researchers ask humans to provide independent decisions before they are shown AI assistance. This is done to make explicit the influence of AI assistance on the human's decision. We develop a cognitive model that allows us to infer the latent reliance strategy of humans on AI assistance without asking the human to make an independent decision. We validate the model's predictions through two behavioral experiments. The first experiment follows a concurrent paradigm where humans are shown AI assistance alongside the decision problem. The second experiment follows a sequential paradigm where humans provide an independent judgment on a decision problem before AI assistance is made available. The model's predicted reliance strategies closely track the strategies employed by humans in the two experimental paradigms. Our model provides a principled way to infer reliance on AI assistance and may be used to expand the scope of investigation on human-AI collaboration.
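Although the abstract does not spell out the model, the core inference problem can be pictured with a toy latent-mixture sketch: agreement between the human's final answer and the AI's suggestion arises either from reliance on the AI or from an independent judgment that happens to coincide. Everything below, including the function name, the chance-agreement parameter p_match, and the uniform prior, is a hypothetical illustration rather than the authors' cognitive model.

```python
import numpy as np

# Purely illustrative sketch (not the paper's model): in a concurrent paradigm
# we only observe whether the final human answer matches the AI suggestion.
# Assume that with probability r the human adopts the AI's answer, and with
# probability 1 - r they answer on their own, matching the AI by chance with
# probability p_match. A grid posterior over the latent reliance rate r follows.

def infer_reliance(final_matches_ai, p_match, grid=None):
    """Grid posterior over the latent reliance rate r under a uniform prior."""
    grid = np.linspace(0.0, 1.0, 501) if grid is None else grid
    k = int(np.sum(final_matches_ai))            # trials where human and AI agree
    n = len(final_matches_ai)
    p_agree = np.clip(grid + (1 - grid) * p_match, 1e-9, 1 - 1e-9)
    log_lik = k * np.log(p_agree) + (n - k) * np.log(1 - p_agree)
    post = np.exp(log_lik - log_lik.max())
    return grid, post / post.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_r, p_match, n = 0.6, 0.3, 200
    relied = rng.random(n) < true_r                       # latent reliance events
    agree = np.where(relied, True, rng.random(n) < p_match)
    r_grid, posterior = infer_reliance(agree, p_match)
    print("posterior mean reliance:", float((r_grid * posterior).sum()))
```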
Nutritional Labels for Data and Models
An essential ingredient of successful machine-assisted decision-making, particularly in high-stakes decisions, is interpretability: allowing humans to understand, trust and, if necessary, contest the computational process and its outcomes. These decision-making processes are typically complex: carried out in multiple steps, employing models with many hidden assumptions, and relying on datasets that are often used outside of the original context for which they were intended. In response, humans need to be able to determine the "fitness for use" of a given model or dataset, and to assess the methodology that was used to produce it. To address this need, we propose to develop interpretability and transparency tools based on the concept of a nutritional label, drawing an analogy to the food industry, where simple, standard labels convey information about the ingredients and production processes. Nutritional labels are derived automatically or semi-automatically as part of the complex process that gave rise to the data or model they describe, embodying the paradigm of interpretability-by-design. In this paper we further motivate nutritional labels, describe our instantiation of this paradigm for algorithmic rankers, and give a vision for developing nutritional labels that are appropriate for different contexts and stakeholders.
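To make the label analogy concrete, here is a hedged sketch of what a machine-readable nutritional label for an algorithmic ranker might record; the field names and example values are illustrative assumptions, not the schema or tooling described in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative sketch of a machine-readable nutritional label for an algorithmic
# ranker. The fields mirror the "ingredients and production process" analogy;
# they are hypothetical, not the schema proposed in the paper.

@dataclass
class RankerNutritionalLabel:
    recipe: Dict[str, float]        # attributes and the weights the ranker applies to them
    ingredients: List[str]          # attributes that most influence the produced ranking
    data_provenance: str            # where the input data came from and its intended context
    stability: float                # how stable the ranking is under small perturbations
    fairness_notes: Dict[str, str]  # per-group diagnostics, e.g. representation at the top
    fit_for_use: str                # guidance on contexts the ranking is (not) suited for

label = RankerNutritionalLabel(
    recipe={"publications": 0.5, "citations": 0.3, "funding": 0.2},
    ingredients=["citations", "publications"],
    data_provenance="departmental records; collected for a different, accreditation context",
    stability=0.82,
    fairness_notes={"top_10_representation": "inspect per-group counts before use"},
    fit_for_use="exploratory comparison only; not validated for individual evaluation",
)
print(label.fit_for_use)
```

Keeping the label machine-readable means it can be emitted automatically by the same pipeline that produced the ranking, in the spirit of interpretability-by-design described above.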
- PAR ID: 10176629
- Date Published:
- Journal Name: A Quarterly bulletin of the Computer Society of the IEEE Technical Committee on Data Engineering
- Volume: 42
- Issue: 3
- ISSN: 1053-1238
- Page Range / eLocation ID: 13-23
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions, such as humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors like decision stakes and their interaction experiences. (An illustrative sketch of the trust-as-hidden-state idea follows this list.)
- Abstract: Understanding why animals (including humans) choose one thing over another is one of the key questions underlying the fields of behavioural ecology, behavioural economics and psychology. Most traditional studies of food choice in animals focus on simple, single-attribute decision tasks. However, animals in the wild are often faced with multi-attribute choice tasks where options in the choice set vary across multiple dimensions. Multi-attribute decision-making is particularly relevant for flower-visiting insects faced with deciding between flowers that may differ in reward attributes such as sugar concentration, nectar volume and pollen composition as well as non-rewarding attributes such as colour, symmetry and odour. How do flower-visiting insects deal with complex multi-attribute decision tasks? Here we review and synthesise research on the decision strategies used by flower-visiting insects when making multi-attribute decisions. In particular, we review how different types of foraging frameworks (classic optimal foraging theory, nutritional ecology, heuristics) conceptualise multi-attribute choice and we discuss how phenomena such as innate preferences, flower constancy and context dependence influence our understanding of flower choice. We find that multi-attribute decision-making is a complex process that can be influenced by innate preferences, flower constancy, the composition of the choice set and economic reward value. We argue that to understand and predict flower choice in flower-visiting insects, we need to move beyond simplified choice sets towards a view of multi-attribute choice which integrates the role of non-rewarding attributes and which includes flower constancy, innate preferences and context dependence. We further caution that behavioural experiments need to consider the possibility of context dependence in the design and interpretation of preference experiments. We conclude with a discussion of outstanding questions for future research. We also present a conceptual framework that incorporates the multiple dimensions of choice behaviour.
- For predictive models to provide reliable guidance in decision-making processes, they are often required to be accurate and robust to distribution shifts. Shortcut learning, where a model relies on spurious correlations or shortcuts to predict the target label, undermines the robustness property, leading to models with poor out-of-distribution accuracy despite good in-distribution performance. Existing work on shortcut learning either assumes that the set of possible shortcuts is known a priori or is discoverable using interpretability methods such as saliency maps, which might not always be true. Instead, we propose a two-step approach to (1) efficiently identify relevant shortcuts, and (2) leverage the identified shortcuts to build models that are robust to distribution shifts. Our approach relies on having access to a (possibly) high-dimensional set of auxiliary labels at training time, some of which correspond to possible shortcuts. We show both theoretically and empirically that our approach is able to identify a sufficient set of shortcuts, leading to more efficient predictors in finite samples. (A sketch of this two-step idea follows this list.)
- Human-AI collaboration is an increasingly commonplace part of decision-making in real-world applications. However, how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial in making the decision to solicit advice. We illustrate that the metacognitive bandit makes decisions similar to humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior biases towards AI to produce this behavior. (A sketch of this bandit idea follows this list.)
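For the hidden Markov model of trust described in the first related item above, a minimal sketch of the core idea, a latent trust state filtered from observed reliance decisions, is given below. The two states, the transition and emission probabilities, and the forward filter are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Minimal sketch of the trust-as-hidden-state idea: the latent state is the
# decision maker's trust in the AI, and the observation on each trial is
# whether they relied on the AI's recommendation. The states, probabilities,
# and forward filter below are illustrative, not the paper's fitted model.

states = ["low_trust", "high_trust"]
transition = np.array([[0.8, 0.2],    # trust tends to persist from trial to trial
                       [0.1, 0.9]])
p_rely = np.array([0.2, 0.9])         # P(rely on AI | trust state)
prior = np.array([0.5, 0.5])

def filter_trust(reliance_seq):
    """Forward-filter the posterior over trust states from observed reliance."""
    belief = prior.copy()
    trajectory = []
    for relied in reliance_seq:
        belief = belief @ transition                 # predict the next trust state
        lik = p_rely if relied else (1 - p_rely)     # likelihood of this trial's behavior
        belief = belief * lik
        belief /= belief.sum()
        trajectory.append(belief.copy())
    return np.array(trajectory)

# Example: beliefs shift toward low trust after a run of non-reliance decisions.
print(filter_trust([True, True, False, False, False]))
```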
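For the two-step shortcut approach in the third related item, the sketch below shows one plausible instantiation: flag auxiliary labels that are strongly correlated with the target, then reweight training examples so a flagged shortcut no longer predicts the target. The correlation threshold and the reweighting scheme are assumptions made for illustration; the paper's actual identification and robustification procedures may differ.

```python
import numpy as np

# Hedged sketch of the two-step idea from the shortcut-learning abstract above;
# the identification rule (a correlation threshold) and the mitigation
# (reweighting to break the shortcut-target correlation) are illustrative
# assumptions, not the paper's algorithm.

def identify_shortcuts(aux_labels, y, threshold=0.5):
    """Step 1: flag auxiliary (binary) labels highly correlated with the target y."""
    corr = np.array([abs(np.corrcoef(aux_labels[:, j], y)[0, 1])
                     for j in range(aux_labels.shape[1])])
    return np.where(corr > threshold)[0]

def decorrelating_weights(shortcut, y):
    """Step 2: per-example weights that make the joint (shortcut, y) distribution uniform."""
    weights = np.empty(len(y), dtype=float)
    for s in (0, 1):
        for t in (0, 1):
            mask = (shortcut == s) & (y == t)
            weights[mask] = 0.25 / max(mask.mean(), 1e-12)  # equal mass per cell
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 1000
    y = rng.integers(0, 2, n)
    shortcut = np.where(rng.random(n) < 0.9, y, 1 - y)   # spuriously tracks the label
    noise = rng.integers(0, 2, n)                        # irrelevant auxiliary label
    aux = np.column_stack([shortcut, noise])
    flagged = identify_shortcuts(aux, y)
    print("flagged auxiliary labels:", flagged)          # expected: [0]
    w = decorrelating_weights(aux[:, flagged[0]], y)
    # A downstream classifier would now be trained with sample_weight=w so the
    # shortcut carries no predictive signal about y in the reweighted data.
```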
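For the metacognitive bandit in the last related item, the sketch below treats "answer yourself" and "solicit the AI" as two arms whose success rates are learned with Beta posteriors and chosen by Thompson sampling, with trial difficulty discounting the self arm. This is an illustration of the general idea under stated assumptions, not the authors' implementation; note that the symmetric priors encode no prior bias toward or against the AI.

```python
import numpy as np

# Hedged sketch of a metacognitive bandit: two "arms" (answer yourself vs.
# solicit the AI) whose success rates are learned with Beta posteriors and
# chosen by Thompson sampling, with trial difficulty discounting the self arm.
# An illustration of the general idea, not the authors' implementation.

rng = np.random.default_rng(2)

class MetacognitiveBandit:
    def __init__(self):
        # Symmetric Beta(1, 1) priors: no prior bias toward or against the AI.
        self.alpha = {"self": 1.0, "ai": 1.0}
        self.beta = {"self": 1.0, "ai": 1.0}

    def choose(self, difficulty):
        """Pick an information source for this trial (difficulty in [0, 1])."""
        sampled = {
            "self": rng.beta(self.alpha["self"], self.beta["self"]) * (1 - difficulty),
            "ai": rng.beta(self.alpha["ai"], self.beta["ai"]),
        }
        return max(sampled, key=sampled.get)

    def update(self, source, correct):
        """Update the chosen source's posterior with the trial outcome (0 or 1)."""
        self.alpha[source] += correct
        self.beta[source] += 1 - correct

bandit = MetacognitiveBandit()
for _ in range(200):
    difficulty = rng.random()
    source = bandit.choose(difficulty)
    # Simulated outcome: the AI is 80% accurate; the person degrades with difficulty.
    p_correct = 0.8 if source == "ai" else 0.9 * (1 - difficulty)
    bandit.update(source, int(rng.random() < p_correct))
print({k: bandit.alpha[k] / (bandit.alpha[k] + bandit.beta[k]) for k in ("self", "ai")})
```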