Title: Humans utilize sensory evidence of others’ intended action to make online decisions
Abstract: We often acquire sensory information from another person’s actions to make decisions on how to move, such as when walking through a crowded hallway. Past interactive decision-making research has focused on cognitive tasks that did not allow for sensory information exchange between humans prior to a decision. Here, we test the idea that humans accumulate sensory evidence of another person’s intended action to decide their own movement. In a competitive sensorimotor task, we show that humans exploit time to accumulate sensory evidence of another’s intended action and utilize this information to decide how to move. We captured this continuous interactive decision-making behaviour with a drift-diffusion model. Surprisingly, aligned with a ‘paralysis-by-analysis’ phenomenon, we found that humans often waited too long to accumulate sensory evidence and failed to make a decision. Understanding how humans engage in interactive and online decision-making has broad implications that span sociology, athletics, interactive technology, and economics.
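As a rough illustration of the modelling approach named in the abstract, a single drift-diffusion trial can be simulated in a few lines. The parameter values below (drift, noise, threshold, deadline) are illustrative assumptions, not the values fit in the paper; the deadline term is included only to mimic the lapse ('paralysis-by-analysis') outcome the abstract describes.

```python
import numpy as np

def simulate_ddm(drift=0.8, noise=1.0, threshold=1.5,
                 dt=0.001, max_time=3.0, rng=None):
    """One drift-diffusion trial: evidence accumulates with constant
    drift plus Gaussian noise until it crosses +/- threshold, or the
    deadline passes. Returns (decision, time); decision is None when
    no boundary is reached -- a lapse, akin to 'paralysis by analysis'."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_time:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return 1, t
        if x <= -threshold:
            return -1, t
    return None, t  # deadline reached without a decision

rng = np.random.default_rng(0)
trials = [simulate_ddm(rng=rng) for _ in range(200)]
lapse_rate = sum(d is None for d, _ in trials) / len(trials)
```

The fraction of `None` outcomes gives the lapse rate; in the fitted model this trade-off between waiting for evidence and meeting the deadline is what the paper quantifies.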
Award ID(s): 2146888
PAR ID: 10367545
Author(s) / Creator(s):
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Scientific Reports
Volume: 12
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by emotions such as humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, by characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on their trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors such as decision stakes and their interaction experiences.
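The hidden Markov structure described above can be sketched as a forward filter over a latent trust state. The two-state parameterization, transition matrices, and reliance probabilities below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

# Hypothetical parameters: two latent trust states (low, high).
# Transitions depend on whether the AI's last recommendation proved
# correct; the emission is P(rely on AI | trust state).
T_after_correct = np.array([[0.6, 0.4],   # low  -> (low, high)
                            [0.1, 0.9]])  # high -> (low, high)
T_after_wrong   = np.array([[0.9, 0.1],
                            [0.5, 0.5]])
p_rely = np.array([0.2, 0.85])  # P(follow AI) in (low, high) trust

def predict_reliance(ai_correct_history, belief=np.array([0.5, 0.5])):
    """Filter the latent trust belief forward through the interaction
    history; return the predicted reliance probability each round."""
    preds = []
    for correct in ai_correct_history:
        preds.append(float(belief @ p_rely))
        T = T_after_correct if correct else T_after_wrong
        belief = belief @ T  # propagate trust belief to the next round
    return preds

preds = predict_reliance([True, True, False, True])
```

With these made-up numbers, a correct AI recommendation shifts belief toward the high-trust state, so the predicted reliance probability rises on the next round; a wrong one shifts it back.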
  2. AI assistance is readily available to humans in a variety of decision-making applications. In order to fully understand the efficacy of such joint decision-making, it is important to first understand the human’s reliance on AI. However, there is a disconnect between how joint decision-making is studied and how it is practiced in the real world. More often than not, researchers ask humans to provide independent decisions before they are shown AI assistance. This is done to make explicit the influence of AI assistance on the human’s decision. We develop a cognitive model that allows us to infer the latent reliance strategy of humans on AI assistance without asking the human to make an independent decision. We validate the model’s predictions through two behavioral experiments. The first experiment follows a concurrent paradigm where humans are shown AI assistance alongside the decision problem. The second experiment follows a sequential paradigm where humans provide an independent judgment on a decision problem before AI assistance is made available. The model’s predicted reliance strategies closely track the strategies employed by humans in the two experimental paradigms. Our model provides a principled way to infer reliance on AI assistance and may be used to expand the scope of investigation on human-AI collaboration.
  3. The intrinsic uncertainty of sensory information (i.e., evidence) does not necessarily deter an observer from making a reliable decision. Indeed, uncertainty can be reduced by integrating (accumulating) incoming sensory evidence. It is widely thought that this accumulation is instantiated via recurrent rate-code neural networks. Yet, these networks do not fully explain important aspects of perceptual decision-making, such as a subject’s ability to retain accumulated evidence during temporal gaps in the sensory evidence. Here, we utilized computational models to show that cortical circuits can switch flexibly between “retention” and “integration” modes during perceptual decision-making. Further, we found that, depending on how the sensory evidence was read out, we could simulate “stepping” and “ramping” activity patterns, which may be analogous to those seen in different studies of decision-making in the primate parietal cortex. This finding may reconcile these previous empirical studies because it suggests these two activity patterns emerge from the same mechanism.
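The retention/integration switch described above can be illustrated with a toy discrete-time accumulator rather than the recurrent rate-code networks modelled in the study; here a gap in the evidence stream (marked with NaN) simply triggers a hold, and all time constants are made-up values.

```python
import numpy as np

def run_circuit(evidence, dt=0.01, tau=0.1):
    """Toy accumulator that switches between an 'integration' mode
    (evidence drives the state) and a 'retention' mode (the state is
    held unchanged through gaps in the evidence, i.e. NaN samples)."""
    x = 0.0
    trace = []
    for e in evidence:
        if np.isnan(e):
            pass                  # retention mode: hold accumulated value
        else:
            x += (dt / tau) * e   # integration mode: accumulate evidence
        trace.append(x)
    return np.array(trace)

# An evidence pulse, then a temporal gap, then more evidence.
ev = np.concatenate([np.ones(50), np.full(30, np.nan), np.ones(50)])
trace = run_circuit(ev)
```

The accumulated value is flat during the gap and resumes ramping afterwards; in the paper's framework the same circuit achieves this by switching dynamical modes, not by an explicit if/else.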
  4. AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on AI, regardless of whether the second opinion is generated by a peer or another AI model. However, if decision-makers have the control to decide when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaborations in decision-making.
  5. To reshape energy systems towards renewable energy resources, decision makers need to decide today on how to make the transition. Energy scenarios are widely used to guide decision making in this context. While considerable effort has been put into developing energy scenarios, researchers have pointed out three requirements for energy scenarios that are not fulfilled satisfactorily yet: The development and evaluation of energy scenarios should (1) incorporate the concept of sustainability, (2) provide decision support in a transparent way and (3) be replicable for other researchers. To meet these requirements, we combine different methodological approaches: story-and-simulation (SAS) scenarios, multi-criteria decision-making (MCDM), information modeling and co-simulation. We show in this paper how the combination of these methods can lead to an integrated approach for sustainability evaluation of energy scenarios with automated information exchange. Our approach consists of a sustainability evaluation process (SEP) and an information model for modeling dependencies. The objectives are to guide decisions towards sustainable development of the energy sector and to make the scenario and decision support processes more transparent for both decision makers and researchers. 
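One common MCDM instance is a weighted sum over normalized criteria. The abstract above does not specify which MCDM method the sustainability evaluation process uses, so the criteria, weights, and scenario scores below are purely illustrative.

```python
# Minimal weighted-sum MCDM sketch for ranking energy scenarios.
# Criteria, weights, and scores are illustrative assumptions only.
criteria = ["co2_reduction", "cost", "supply_security"]
weights = {"co2_reduction": 0.5, "cost": 0.3, "supply_security": 0.2}

# Normalized criterion scores in [0, 1] (higher is better) per scenario.
scenarios = {
    "high_renewables": {"co2_reduction": 0.9, "cost": 0.4, "supply_security": 0.6},
    "moderate_mix":    {"co2_reduction": 0.6, "cost": 0.7, "supply_security": 0.8},
}

def score(scenario):
    """Aggregate a scenario's criterion scores into one sustainability score."""
    return sum(weights[c] * scenario[c] for c in criteria)

ranking = sorted(scenarios, key=lambda s: score(scenarios[s]), reverse=True)
```

Publishing the weights and normalized scores alongside the ranking is one way to meet the transparency and replicability requirements the abstract lists.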