Abstract. In the geosciences, recent attention has been paid to the influence of uncertainty on expert decision-making. When making decisions under conditions of uncertainty, people tend to employ heuristics (rules of thumb) based on experience, relying on their prior knowledge and beliefs to intuitively guide choice. Over 50 years of decision-making research in cognitive psychology demonstrates that heuristics can lead to less-than-optimal decisions, collectively referred to as biases. For example, the availability bias occurs when people make judgments based on what is most dominant or accessible in memory; geoscientists who have spent the past several months studying strike-slip faults will have this terrain most readily available in their mind when interpreting new seismic data. Given the important social and commercial implications of many geoscience decisions, there is a need to develop effective interventions for removing or mitigating decision bias. In this paper, we outline the key insights from decision-making research about how to reduce bias and review the literature on debiasing strategies. First, we define an optimal decision, since improving decision-making requires having a standard to work towards. Next, we discuss the cognitive mechanisms underlying decision biases and describe three biases that have been shown to influence geoscientists' decision-making (availability bias, framing bias, anchoring bias). Finally, we review existing debiasing strategies that have applicability in the geosciences, with special attention given to strategies that make use of information technology and artificial intelligence (AI). We present two case studies illustrating different applications of intelligent systems for the debiasing of geoscientific decision-making, wherein debiased decision-making is an emergent property of the coordinated and integrated processing of human–AI collaborative teams.
This content will become publicly available on May 23, 2026
Neurotechnology for enhancing human operation of robotic and semi-autonomous systems
Human operators of remote and semi-autonomous systems must have a high level of executive function to safely and efficiently conduct operations. These operators face unique cognitive challenges when monitoring and controlling robotic machines, such as vehicles, drones, and construction equipment. The development of safe and experienced human operators of remote machines requires structured training and credentialing programs. This review critically evaluates the potential for incorporating neurotechnology into remote systems operator training and work to enhance human-machine interactions, performance, and safety. Recent evidence demonstrating that different noninvasive neuromodulation and neurofeedback methods can improve critical executive functions such as attention, learning, memory, and cognitive control is reviewed. We further describe how these approaches can be used to improve training outcomes, as well as teleoperator vigilance and decision-making. We also describe how neuromodulation can help remote operators during complex or high-risk tasks by mitigating impulsive decision-making and cognitive errors. While our review advocates for incorporating neurotechnology into remote operator training programs, continued research is required to evaluate how these approaches will impact industrial safety and workforce readiness.
- Award ID(s):
- 2220677
- PAR ID:
- 10644930
- Publisher / Repository:
- Frontiers in Robotics and AI
- Date Published:
- Journal Name:
- Frontiers in Robotics and AI
- Volume:
- 12
- ISSN:
- 2296-9144
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.
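The kind of environment the abstract describes, where immediate rewards do not predict long-term outcomes, can be illustrated with a minimal sketch. The toy task, policies, and reward values below are illustrative assumptions, not the paper's actual tutor or environment: a two-stage choice where the option with the better immediate reward has the worse total payoff, so a greedy rule of thumb loses to a far-sighted heuristic.

```python
# Hypothetical two-stage task (illustrative values, not from the paper).
# Each option yields an (immediate, delayed) reward pair; the option
# with the better immediate reward has the worse long-run total.
OUTCOMES = {"A": (10, -20), "B": (-5, 20)}

def myopic_policy(options):
    """Greedy rule of thumb: pick the best immediate reward."""
    return max(options, key=lambda o: OUTCOMES[o][0])

def farsighted_policy(options):
    """Optimal heuristic here: pick the best total (long-run) reward."""
    return max(options, key=lambda o: sum(OUTCOMES[o]))

def total_reward(policy):
    """Total payoff earned by following a policy in this task."""
    return sum(OUTCOMES[policy(["A", "B"])])

print(total_reward(myopic_policy))      # 10 - 20 = -10
print(total_reward(farsighted_policy))  # -5 + 20 = 15
```

In a task this small the optimal heuristic can be found by exhaustive comparison; the paper's contribution is discovering and teaching such strategies automatically at scale.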
-
Humans possess an innate ability to incorporate tools into our body schema to perform a myriad of tasks not possible with our natural limbs. Human-in-the-loop telerobotic systems (HiLTS) are tools that extend human manipulation capabilities to remote and virtual environments. Unlike most hand-held tools, however, HiLTS often possess complex electromechanical architectures that introduce non-trivial transmission dynamics between the robot’s leader and follower, which alter or obfuscate the environment’s dynamics. While considerable research has focused on negating or circumventing these dynamics, it is not well understood how capable human operators are at incorporating these transmission dynamics into their sensorimotor control scheme. To begin answering this question, we recruited N=12 participants to use a novel reconfigurable teleoperator with varying transmission dynamics to perform a visuo-haptic tracking task. Contrary to our original hypothesis, our findings demonstrate that humans can account for substantial differences in teleoperator transmission dynamics and produce the compensatory strategies necessary to adequately control the teleoperator. These findings suggest that advances in transparency algorithms and haptic feedback approaches must be coupled with control designs that leverage the unique capabilities of the human operator in the loop.
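How transmission dynamics "alter or obfuscate the environment's dynamics" can be sketched with a simple simulation. The model below is a generic assumption for illustration (not the authors' reconfigurable apparatus): the follower is coupled to the leader through a spring-damper transmission, so the follower's motion lags and reshapes the leader's commanded sinusoidal trajectory, and the operator feels the combined dynamics rather than the environment alone.

```python
import math

# Illustrative parameters (assumed, not from the study):
# stiffness, damping, follower mass, integration step.
K, B, M, DT = 500.0, 20.0, 1.0, 0.001

def simulate(duration=2.0, freq=1.0):
    """Leader commands a sinusoid; follower responds through a
    spring-damper transmission, integrated with explicit Euler."""
    x_f, v_f = 0.0, 0.0  # follower position and velocity
    trace = []
    for i in range(int(duration / DT)):
        t = i * DT
        x_l = math.sin(2 * math.pi * freq * t)                    # leader position
        v_l = 2 * math.pi * freq * math.cos(2 * math.pi * freq * t)  # leader velocity
        force = K * (x_l - x_f) + B * (v_l - v_f)                 # transmission force
        v_f += force / M * DT
        x_f += v_f * DT
        trace.append((x_l, x_f))
    return trace

trace = simulate()
lag = max(abs(x_l - x_f) for x_l, x_f in trace)
print(f"peak leader-follower tracking error: {lag:.3f}")
```

Stiffening the coupling (larger K) shrinks the tracking error, while a softer or more damped transmission distorts the trajectory more, which is the kind of variation the participants had to compensate for.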
-
Abstract Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations involving the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that effect, models of human judgments and decision making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI teammates and to clarify how and why humans (dis)trust machines. The current paper will examine the ADC model as it is applied to the context of HAIT, and the challenges associated with the use of human-centric ethical considerations when applied to an AI context.
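The ADC model's core idea, that a moral judgment combines evaluations of the agent (intent), the deed (the action itself), and the consequence (the outcome), can be sketched computationally. The scoring rule, weights, and scales below are illustrative assumptions, not the published model's formalization:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Component valences on an assumed -1..+1 scale (illustrative)."""
    agent: float        # -1 bad intent  .. +1 good intent
    deed: float         # -1 norm-violating .. +1 norm-conforming
    consequence: float  # -1 harmful .. +1 beneficial

def adc_judgment(s: Scenario, weights=(1.0, 1.0, 1.0)) -> float:
    """Toy scoring rule: weighted sum of the three components.
    Positive values indicate a scenario judged morally acceptable."""
    wa, wd, wc = weights
    return wa * s.agent + wd * s.deed + wc * s.consequence

# A well-intended rule violation with a good outcome (e.g., speeding
# to reach a hospital) can still come out positive overall.
print(adc_judgment(Scenario(agent=1.0, deed=-1.0, consequence=1.0)))  # 1.0
```

Even this toy version shows why the model is attractive for human–AI teaming: the three components separate *why* an action is judged (dis)favorably, which a machine teammate could expose when explaining its choices.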
