- PAR ID: 10176629
- Date Published:
- Journal Name: A Quarterly Bulletin of the Computer Society of the IEEE Technical Committee on Data Engineering
- Volume: 42
- Issue: 3
- ISSN: 1053-1238
- Page Range / eLocation ID: 13-23
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The increased integration of artificial intelligence (AI) technologies into human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions such as humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on that trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors such as decision stakes and their interaction experiences.
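The abstract above names a hidden Markov model over trust but does not give its parameterization. As a minimal sketch, assuming a two-state trust variable (low/high), binary reliance observations, and hand-picked transition and emission probabilities (none of which come from the paper), the filtering and prediction step might look like this:

```python
# Illustrative sketch only: a two-state HMM of trust whose emissions are binary
# reliance decisions. All state definitions and probabilities are assumed values
# for illustration, not the parameters estimated in the paper.
import numpy as np

# P(trust_t | trust_{t-1}); rows = previous state (low, high), cols = next state
transition = np.array([[0.8, 0.2],
                       [0.3, 0.7]])
# P(reliance | trust); columns are (not rely, rely)
emission = np.array([[0.7, 0.3],   # low trust: mostly overrides the AI
                     [0.2, 0.8]])  # high trust: mostly follows the AI
initial = np.array([0.5, 0.5])

def filter_and_predict(reliance_history):
    """Forward-filter the belief over trust and predict the next reliance decision."""
    belief = initial.copy()
    for relied in reliance_history:
        belief = belief * emission[:, int(relied)]   # condition on the observed decision
        belief = belief / belief.sum()
        belief = transition.T @ belief               # propagate trust one step forward
    return float(belief @ emission[:, 1])            # P(rely on AI in the next round)

print(filter_and_predict([1, 1, 0, 1]))
```

Contextual factors such as decision stakes could enter a model of this shape by indexing separate transition or emission parameters, though the exact dependence is specific to the paper.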
-
Abstract: AI assistance is readily available to humans in a variety of decision-making applications. In order to fully understand the efficacy of such joint decision-making, it is important to first understand the human’s reliance on AI. However, there is a disconnect between how joint decision-making is studied and how it is practiced in the real world. More often than not, researchers ask humans to provide independent decisions before they are shown AI assistance. This is done to make explicit the influence of AI assistance on the human’s decision. We develop a cognitive model that allows us to infer the latent reliance strategy of humans on AI assistance without asking the human to make an independent decision. We validate the model’s predictions through two behavioral experiments. The first experiment follows a concurrent paradigm where humans are shown AI assistance alongside the decision problem. The second experiment follows a sequential paradigm where humans provide an independent judgment on a decision problem before AI assistance is made available. The model’s predicted reliance strategies closely track the strategies employed by humans in the two experimental paradigms. Our model provides a principled way to infer reliance on AI assistance and may be used to expand the scope of investigation on human-AI collaboration.
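As a rough illustration of inferring a latent reliance strategy from final decisions alone (the concurrent paradigm, where no independent human judgment is collected), the sketch below scores a few hand-defined candidate strategies by likelihood. The strategy set, response probabilities, and data are assumptions made up for the example, not the authors' cognitive model.

```python
# Illustrative sketch only: compare candidate reliance strategies by how well they
# explain observed (AI advice, final decision) pairs, under a uniform prior.
import numpy as np

# Each trial: (ai_advice, final_decision), both binary labels
trials = [(1, 1), (0, 0), (1, 1), (0, 1), (1, 1)]

def likelihood(strategy):
    """P(observed decisions | strategy), with hypothetical response probabilities."""
    probs = []
    for advice, decision in trials:
        if strategy == "always_rely":
            p = 0.95 if decision == advice else 0.05
        elif strategy == "never_rely":
            p = 0.5                                  # decision independent of advice
        else:  # "partial_reliance"
            p = 0.75 if decision == advice else 0.25
        probs.append(p)
    return float(np.prod(probs))

strategies = ["always_rely", "never_rely", "partial_reliance"]
scores = {s: likelihood(s) for s in strategies}
total = sum(scores.values())
print({s: round(v / total, 3) for s, v in scores.items()})  # posterior under uniform prior
```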
-
Abstract: Understanding why animals (including humans) choose one thing over another is one of the key questions underlying the fields of behavioural ecology, behavioural economics and psychology. Most traditional studies of food choice in animals focus on simple, single-attribute decision tasks. However, animals in the wild are often faced with multi-attribute choice tasks where options in the choice set vary across multiple dimensions. Multi-attribute decision-making is particularly relevant for flower-visiting insects faced with deciding between flowers that may differ in reward attributes such as sugar concentration, nectar volume and pollen composition as well as non-rewarding attributes such as colour, symmetry and odour. How do flower-visiting insects deal with complex multi-attribute decision tasks?
Here we review and synthesise research on the decision strategies used by flower‐visiting insects when making multi‐attribute decisions. In particular, we review how different types of foraging frameworks (classic optimal foraging theory, nutritional ecology, heuristics) conceptualise multi‐attribute choice and we discuss how phenomena such as innate preferences, flower constancy and context dependence influence our understanding of flower choice.
We find that multi‐attribute decision‐making is a complex process that can be influenced by innate preferences, flower constancy, the composition of the choice set and economic reward value. We argue that to understand and predict flower choice in flower‐visiting insects, we need to move beyond simplified choice sets towards a view of multi‐attribute choice which integrates the role of non‐rewarding attributes and which includes flower constancy, innate preferences and context dependence. We further caution that behavioural experiments need to consider the possibility of context dependence in the design and interpretation of preference experiments.
We conclude with a discussion of outstanding questions for future research. We also present a conceptual framework that incorporates the multiple dimensions of choice behaviour.
-
Human-AI collaboration is an increasingly commonplace part of decision-making in real world applications. However, how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial in making the decision to solicit advice. We illustrate that the metacognitive bandit makes decisions similar to humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior biases towards AI to produce this behavior.
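To make the advice-seeking framing concrete, here is a minimal sketch treating the choice between deciding alone and soliciting AI advice as a two-armed bandit with Beta posteriors and Thompson sampling. The arm structure, priors, and simulated accuracies are illustrative assumptions, not the metacognitive bandit model itself (which, for instance, also conditions on trial difficulty).

```python
# Illustrative sketch only: a two-armed bandit over "decide alone" vs. "ask the AI",
# with Beta posteriors over each option's accuracy and Thompson sampling to choose.
import random

class AdviceSeekingBandit:
    def __init__(self):
        # Beta pseudo-counts [successes + 1, failures + 1] for each option
        self.counts = {"self": [1, 1], "ask_ai": [1, 1]}

    def choose(self):
        samples = {arm: random.betavariate(a, b) for arm, (a, b) in self.counts.items()}
        return max(samples, key=samples.get)

    def update(self, arm, correct):
        self.counts[arm][0 if correct else 1] += 1

bandit = AdviceSeekingBandit()
for _ in range(200):
    arm = bandit.choose()
    # Hypothetical environment: the AI is correct 80% of the time, the human 60%
    correct = random.random() < (0.8 if arm == "ask_ai" else 0.6)
    bandit.update(arm, correct)
print(bandit.counts)  # the "ask_ai" arm typically accumulates most of the pulls
```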
-
Algorithmic case-based decision support provides examples to help humans make sense of predicted labels and aid them in decision-making tasks. Despite the promising performance of supervised learning, representations learned by supervised models may not align well with human intuitions: what models consider similar examples can be perceived as distinct by humans. As a result, they have limited effectiveness in case-based decision support. In this work, we combine ideas from metric learning with supervised learning to examine the importance of alignment for effective decision support. In addition to instance-level labels, we use human-provided triplet judgments to learn human-compatible decision-focused representations. Using both synthetic data and human-subject experiments in multiple classification tasks, we demonstrate that such representations are better aligned with human perception than representations optimized solely for classification. Human-compatible representations identify nearest neighbors that are perceived as more similar by humans and allow humans to make more accurate predictions, leading to substantial improvements in human decision accuracy (17.8% in butterfly vs. moth classification and 13.2% in pneumonia classification).
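The combination of instance labels and human triplet judgments can be pictured as a joint objective: a classification loss plus a triplet margin loss on the learned embedding. The sketch below shows that pattern in PyTorch with a toy network and random tensors standing in for data; the architecture, loss weight, and shapes are assumptions for illustration, not the paper's setup.

```python
# Illustrative sketch only: learn a representation with a classification loss plus a
# triplet margin loss over human-provided (anchor, similar, dissimilar) judgments.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=64, emb_dim=16, n_classes=2):
        super().__init__()
        # Embedding used for case-based support; head provides the class prediction
        self.embed = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, emb_dim))
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, x):
        z = self.embed(x)
        return z, self.head(z)

model = Encoder()
ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-ins for data: labelled instances plus human triplet judgments
x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))
anchor, positive, negative = torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 64)

_, logits = model(x)
za, zp, zn = model(anchor)[0], model(positive)[0], model(negative)[0]
loss = ce_loss(logits, y) + 0.5 * triplet_loss(za, zp, zn)  # 0.5 is an arbitrary trade-off weight
optimizer.zero_grad()
loss.backward()
optimizer.step()
```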