

Title: Impacts of Behavioral Biases on Active Learning Strategies
Cyber-Physical-Human Systems (CPHS) interconnect humans, physical plants, and cyber infrastructure across space and time. Industrial processes, electromechanical system operations, and medical diagnosis are examples where human, physical, and cyber components intersect. The emergence of Artificial Intelligence (AI)-based computational models, controllers, and decision support engines has improved the efficiency and cost effectiveness of such systems and processes. These CPHS typically involve a collaborative decision environment comprising AI-based models and human experts. Active Learning (AL) is a category of AI algorithms that aims to learn an efficient decision model by combining the domain expertise of the human expert with the computational capabilities of the AI model. Given the indispensable role of humans and the limited understanding of human behavior in collaborative decision environments, modeling and predicting behavioral biases is a critical need. This paper, for the first time, introduces different behavioral biases within an AL context and investigates their impact on the performance of AL strategies. The modeling of behavioral biases is demonstrated through experiments conducted on a real-world pancreatic cancer dataset. It is observed that the classification accuracy of the decision model drops by at least 20% under each of the behavioral biases.
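As a rough illustration of the setting described above, the sketch below runs pool-based active learning (uncertainty sampling with a logistic-regression learner) against a simulated expert whose labels are perturbed by a simple anchoring-style bias. The synthetic data, the bias model, and every parameter value are illustrative assumptions; this is not the authors' experimental pipeline, which used a real pancreatic cancer dataset.

```python
# Illustrative sketch only (not the paper's code): pool-based active learning
# with uncertainty sampling, where the "human expert" oracle sometimes gives a
# biased label. Data, bias model, and parameters are placeholder assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a tabular clinical dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
idx = rng.permutation(len(X))
labeled = list(idx[:20])                       # small, correctly labeled seed set
unlabeled = list(idx[20:1500])
test = idx[1500:]
answers = {i: y[i] for i in labeled}           # labels the "expert" actually gave

def biased_oracle(i, prev, p_bias=0.2):
    """Toy anchoring-style bias: with probability p_bias the expert repeats
    their previous answer instead of reporting the true label."""
    return prev if (prev is not None and rng.random() < p_bias) else y[i]

model = LogisticRegression(max_iter=1000)
prev = None
for _ in range(50):                            # 50 query rounds
    model.fit(X[labeled], [answers[i] for i in labeled])
    proba = model.predict_proba(X[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain point
    prev = answers[query] = biased_oracle(query, prev)
    labeled.append(query)
    unlabeled.remove(query)

model.fit(X[labeled], [answers[i] for i in labeled])
print("test accuracy with a biased labeler:",
      accuracy_score(y[test], model.predict(X[test])))
```

Raising p_bias, or swapping in other bias models such as fatigue that grows with the number of queries, gives a quick feel for how biased expert labels degrade the learned decision model.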
Award ID(s):
2032751 1842670 2129352
NSF-PAR ID:
10335203
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)
Page Range / eLocation ID:
256 to 261
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communications in the context of value alignment—collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems. 
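As a loose illustration of the value-inference ingredient of such a system, the sketch below keeps a posterior over a handful of candidate goal-importance profiles and updates it from simulated accept/reject feedback. The candidate profiles, the Boltzmann-style feedback model, and all parameters are assumptions made for illustration; this is not the cooperative communication model described in the abstract.

```python
# Minimal sketch of inferring a user's goal-importance weights from
# accept/reject feedback; NOT the paper's cooperative communication model.
# Profiles, the feedback model, and beta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Candidate "value profiles": importance weights over three task goals.
profiles = np.array([[0.7, 0.2, 0.1],
                     [0.2, 0.7, 0.1],
                     [0.1, 0.2, 0.7]])
posterior = np.ones(len(profiles)) / len(profiles)

def p_accept(plan_scores, weights, beta=5.0):
    """Assumed feedback model: the user is more likely to accept a proposal
    whose weighted goal scores are high (a Boltzmann-style response)."""
    return 1.0 / (1.0 + np.exp(-beta * (weights @ plan_scores - 0.5)))

true_weights = profiles[1]            # the user's hidden values in this simulation
for _ in range(25):
    plan = rng.random(3)              # goal-achievement scores of a proposed plan
    accepted = rng.random() < p_accept(plan, true_weights)
    like = np.array([p_accept(plan, w) if accepted else 1.0 - p_accept(plan, w)
                     for w in profiles])
    posterior *= like                 # Bayesian update over candidate profiles
    posterior /= posterior.sum()

print("posterior over candidate value profiles:", np.round(posterior, 3))
```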
  2. Nudging is a behavioral strategy aimed at influencing people's thoughts and actions. Nudging techniques can be found in many situations in our daily lives, and they can target either fast, unconscious human thinking (e.g., by using images to generate fear) or the more careful and effortful slow thinking (e.g., by releasing information that makes us reflect on our choices). In this paper, we propose and discuss a value-based AI-human collaborative framework in which AI systems nudge humans by proposing decision recommendations. Three different nudging modalities, based on when recommendations are presented to the human, are intended to stimulate human fast thinking, slow thinking, or meta-cognition. Values that are relevant to a specific decision scenario are used to decide when and how to use each of these nudging modalities. Examples of such values are decision quality, speed, human upskilling and learning, human agency, and privacy. Several values can be present at the same time, and their priorities can vary over time. The framework treats values as parameters to be instantiated in a specific decision environment.
  3. Human-AI collaboration is an increasingly commonplace part of decision-making in real-world applications. However, how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial when making the decision to solicit advice. We illustrate that the metacognitive bandit makes decisions similar to those of humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior biases towards AI to produce this behavior.
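As a loose, generic illustration of advice seeking framed as a bandit problem, the sketch below uses Thompson sampling over two options, "rely on my own judgment" and "ask the AI", in a simulated environment where difficulty hurts the human more than the AI. It is not the authors' metacognitive bandit, which additionally conditions the advice-seeking decision on per-trial difficulty; all numbers here are made-up assumptions.

```python
# Rough sketch in the spirit of advice-seeking as a bandit problem; NOT the
# authors' metacognitive bandit model (which also conditions the choice on
# per-trial difficulty). Accuracy numbers and the difficulty model are made up.
import numpy as np

rng = np.random.default_rng(2)
alpha = np.ones(2)      # Beta-posterior success counts for [rely on self, ask AI]
beta = np.ones(2)       # Beta-posterior failure counts

def trial_outcome(arm, difficulty):
    """Assumed environment: human accuracy degrades with difficulty, AI less so."""
    p_correct = 0.95 - 0.6 * difficulty if arm == 0 else 0.85 - 0.2 * difficulty
    return rng.random() < p_correct

for _ in range(200):
    difficulty = rng.random()                 # per-trial difficulty in [0, 1]
    samples = rng.beta(alpha, beta)           # Thompson sampling: one draw per arm
    arm = int(np.argmax(samples))             # 0 = own judgment, 1 = solicit advice
    correct = trial_outcome(arm, difficulty)
    alpha[arm] += correct
    beta[arm] += 1 - correct

print("estimated success rates [self, ask AI]:", np.round(alpha / (alpha + beta), 2))
```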
  4. While EXplainable Artificial Intelligence (XAI) approaches aim to improve human-AI collaborative decision-making by improving model transparency and mental model formation, experiential factors associated with human users can cause challenges in ways system designers do not anticipate. In this paper, we first showcase a user study on how anchoring bias can affect mental model formation when users initially interact with an intelligent system, and the role of explanations in addressing this bias. Using a video activity recognition tool in the cooking domain, we asked participants to verify whether a set of kitchen policies was being followed, with each policy focusing on a weakness or a strength. We controlled the order of the policies and the presence of explanations to test our hypotheses. Our main finding shows that those who observed system strengths early on were more prone to automation bias and made significantly more errors due to positive first impressions of the system, although they built a more accurate mental model of the system's competencies. On the other hand, those who encountered weaknesses earlier made significantly fewer errors, since they tended to rely more on themselves, but they also underestimated the model's competencies due to a more negative first impression of the model. Motivated by these findings and similar existing work, we formalize and present a conceptual model of users' past experiences that examines the relations among users' backgrounds, experiences, and human factors in XAI systems based on usage time. Our work presents strong findings and implications, aiming to raise AI designers' awareness of biases associated with user impressions and backgrounds.
  5. One of the early goals of artificial intelligence (AI) was to create algorithms that exhibited behavior indistinguishable from human behavior (i.e., human-like behavior). Today, AI has diverged, often aiming to excel in tasks inspired by human capabilities and to outperform humans, rather than replicating human cognition and action. In this paper, I explore the overarching question of whether computational algorithms have achieved this initial goal of AI. I focus on dynamic decision-making, approaching the question from the perspective of computational cognitive science. I present a general cognitive algorithm intended to emulate human decision-making in dynamic environments, as defined in instance-based learning theory (IBLT). I use the cognitive steps proposed in IBLT to organize and discuss current evidence that supports the human-likeness of some of the decision-making mechanisms. I also highlight the significant research gaps that must be addressed to improve current models and to achieve higher-fidelity computational representations of human decision processes. I conclude with concrete steps toward building algorithms that exhibit human-like behavior, with the ultimate goal of supporting human dynamic decision-making.
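As a compact, generic illustration of the instance-based learning idea, the sketch below stores experienced (timestep, outcome) instances per option, weights them by an ACT-R-style recency activation, and chooses the option with the higher blended value. The two-option gambling task and the parameter values are illustrative assumptions, not the specific models discussed in the paper.

```python
# Compact sketch of instance-based-learning-style choice: store
# (timestep, outcome) instances per option, compute ACT-R-like activations
# from recency, and pick the option with the highest blended value. The
# two-option task and parameter values are illustrative assumptions.
import math
import random

random.seed(3)
DECAY, TEMP, NOISE = 0.5, 0.25, 0.25       # conventional ACT-R-style values

# Experienced instances: option -> list of (timestep, observed outcome).
# Each option starts with one optimistic instance to encourage exploration.
memory = {"safe": [(0, 4.0)], "risky": [(0, 4.0)]}

def blended_value(option, now):
    """Recency-weighted average of past outcomes (simplified IBL blending)."""
    acts = [math.log((now - t) ** -DECAY) + random.gauss(0, NOISE)
            for t, _ in memory[option]]
    weights = [math.exp(a / TEMP) for a in acts]
    return sum(w * o for w, (_, o) in zip(weights, memory[option])) / sum(weights)

def payoff(option):
    """Toy environment: a sure 3.0 versus a risky gamble with lower expected value."""
    return 3.0 if option == "safe" else (10.0 if random.random() < 0.2 else 0.0)

for t in range(1, 101):
    choice = max(memory, key=lambda opt: blended_value(opt, t))
    memory[choice].append((t, payoff(choice)))

print({opt: len(inst) - 1 for opt, inst in memory.items()}, "choices per option")
```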

     