Title: Measuring Temporal Awareness for Human-Aware AI
This research investigated human performance in response to task demands that may be used to convey information about the human to an artificial agent. We performed an experiment with a dynamic time-sharing task to investigate participants' development of temporal awareness of task events unfolding in time. Temporal awareness, as an extension or special case of situation awareness, may provide useful measures of covert mental models applicable to numerous tasks and suitable as input to human-aware AI agents. Temporal awareness measures may be used to classify human performance into the control modes of the contextual control model (COCOM): scrambled, opportunistic, tactical, and strategic. Twenty-one participants completed a within-subjects experiment with an abstract task of resetting four independent timers within their respective windows of opportunity. The results show that temporal measures of task performance are sensitive to changes in task disruptions and difficulty and therefore hold promise for human-aware AI.
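The abstract maps temporal-performance measures onto COCOM's four control modes. The paper does not publish explicit classification rules, so the sketch below is purely illustrative: `hit_rate` (fraction of timer resets landing inside the window of opportunity) and `timing_error` (mean absolute deviation from window centers, in seconds) are hypothetical features, and every threshold is an assumption, not a value from the study.

```python
def classify_control_mode(hit_rate, timing_error):
    """Map two temporal-awareness measures onto COCOM control modes.

    hit_rate: fraction of timer resets inside the window of opportunity.
    timing_error: mean absolute deviation (s) from window centers.
    All thresholds are illustrative assumptions, not from the paper.
    """
    if hit_rate < 0.25:
        return "scrambled"       # reactive, no reliable anticipation
    if hit_rate < 0.50 or timing_error > 2.0:
        return "opportunistic"   # driven by salient cues, little planning
    if hit_rate < 0.85:
        return "tactical"        # rule-based, limited look-ahead
    return "strategic"           # proactive, full temporal model of the task
```

A human-aware agent could run such a classifier online and adapt its assistance when a user's mode degrades, e.g. from tactical toward opportunistic.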
Award ID(s):
2125362
PAR ID:
10506881
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
Sage Journals
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
67
Issue:
1
ISSN:
1071-1813
Page Range / eLocation ID:
1817 to 1823
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. With the proliferation of AI, there is growing concern that individuals may become overly reliant on AI, leading to a decrease in intrinsic skills and autonomy. On the other hand, assistive AI frameworks have the potential to improve human learning and performance by providing personalized learning experiences and real-time feedback. To study these opposing viewpoints on the consequences of AI assistance, we conducted a behavioral experiment using a dynamic decision-making game to assess how AI assistance affects user performance, skill transfer, and cognitive engagement in task execution. Participants were assigned to one of four conditions that featured AI assistance at different time points during the task. Our results suggest that AI assistance can improve immediate task performance without inducing skill degradation or carryover effects in human learning. This observation has important implications for assistive AI frameworks, as it suggests that there are classes of tasks in which assistance can be provided without risking the autonomy of the user. We discuss possible reasons for this set of effects and explore their implications for future research directions.
  2. People form perceptions and interpretations of AI from external sources prior to interacting with a new technology. For example, shared anecdotes and media stories shape prior beliefs that may or may not accurately represent the true nature of AI systems. We hypothesize that people's prior perceptions and beliefs affect human-AI interactions and usage behaviors when they use new applications. This paper presents a user experiment exploring the interplay between users' pre-existing beliefs about AI technology, individual differences, and cognitive bias induced by first impressions of an interactive AI application. We used questionnaire measures as features to categorize users into profiles based on their prior beliefs and attitudes toward technology. In addition, participants were assigned to one of two controlled conditions designed to evoke either positive or negative first impressions during an AI-assisted judgment task with an interactive application. The experiment and results provide empirical evidence that profiling users by surveying their prior beliefs and individual differences can be a beneficial approach to mitigating bias (and/or unanticipated usage), rather than seeking one-size-fits-all solutions.
  3. The integration of robots, particularly drones, into future construction sites introduces new safety challenges requiring enhanced situational awareness (SA) among workers. To address these challenges, this study explores the effectiveness of an AI-driven assistant designed to inform workers about dynamic environmental changes via auditory and visual channels. A mixed-reality bricklaying experiment was developed, simulating worker-drone interactions across three interaction levels: coexistence, cooperation, and collaboration. One hundred five construction-background students participated in tasks with and without the AI assistant, during which their eye-tracking data, productivity, and subjective perceptions were collected. Results indicated that the AI assistant significantly expedited workers’ awareness of approaching drones but concurrently reduced bricklaying productivity. Although participants reported high perceived usefulness and low distraction by the AI assistant itself, findings revealed a trade-off: improved SA toward drones came at the cost of decreased task performance, likely due to increased attentional shifts toward drones. Furthermore, the effectiveness of the assistant varied depending on the interaction level with drones. This study highlights both the opportunities and challenges of applying AI-driven informational systems in future construction environments, offering critical insights for designing human-centered AI technologies that balance safety enhancement with productivity maintenance. 
  4. Learning new languages is a complex cognitive task involving both implicit and explicit processes. Batterink, Oudiette, Reber, and Paller (2014) report that participants with vs. without conscious awareness of a hidden semi-artificial language regularity showed no significant differences in behavioral measures of grammar learning, suggesting that implicit/explicit routes may be functionally equivalent. However, their operationalization of learning via median reaction times might not capture underlying differences in cognitive processes. In a conceptual replication, we compared rule-aware (n=14) and rule-unaware (n=21) participants via drift-diffusion modeling, which can quantify distinct subcomponents of evidence-accumulation processes (Ratcliff & Rouder, 1998). For both groups, grammar learning was manifested in non-decision parameters, suggesting anticipation of motor responses. For rule-aware participants only, learning also affected bias in evidence accumulation during word reading. These results suggest that implicit grammar learning may be manifested through low-level mechanisms whereas explicit grammar learning may involve more direct engagement with encoded target meanings. 
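Drift-diffusion modeling, as cited in the abstract (Ratcliff & Rouder, 1998), treats a two-choice decision as noisy evidence accumulating toward one of two boundaries, with a non-decision time covering stimulus encoding and the motor response. A minimal simulation sketch of that process (parameter names and values are illustrative, not fitted to the replication data):

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, start_bias=0.5, ndt=0.3,
                 noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one trial of a drift-diffusion process.

    Evidence starts at start_bias * boundary and performs a random walk
    with the given drift until it crosses 0 (lower) or boundary (upper).
    Returns (response, reaction_time): response is 1 for the upper
    boundary, 0 for the lower; reaction_time includes non-decision time.
    """
    rng = rng or np.random.default_rng()
    x = start_bias * boundary          # starting point of evidence
    t = 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t + ndt
```

In this framework, the abstract's group difference corresponds to which parameter learning moves: for both groups learning shifted `ndt` (motor anticipation), while only rule-aware participants also showed changes in accumulation bias (here, `start_bias`).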
  5. The use of Artificial Intelligence (AI) decision support is increasing in high-stakes contexts such as healthcare, defense, and finance. Uncertainty information may help users better leverage AI predictions, especially when combined with their domain knowledge. We conducted a human-subject experiment with an online sample to examine the effects of presenting uncertainty information alongside AI recommendations. The experimental stimuli and task, which involved identifying plant and animal images, were drawn from an existing image-recognition deep learning model, a popular approach to AI. The uncertainty information consisted of predicted probabilities that each label was the true label, presented both numerically and visually. The study tested the effect of AI recommendations in a within-subject comparison and of uncertainty information in a between-subject comparison. The results suggest that AI recommendations increased both participants' accuracy and their confidence. Further, providing uncertainty information significantly increased accuracy but not confidence, suggesting that it may be effective for reducing overconfidence. Based on a self-reported measure, participants tended to have higher domain knowledge for animals than for plants, and participants with more domain knowledge were appropriately less confident when uncertainty information was provided. This suggests that people use AI and uncertainty information differently depending on their level of domain knowledge, for example treating the AI as an expert opinion versus a second opinion. If presented appropriately, uncertainty information can thus potentially reduce the overconfidence induced by AI recommendations.
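The abstract describes the uncertainty information as per-label predicted probabilities from a deep learning classifier. It does not specify the model or its output format, so the helper below is a hypothetical sketch of how raw classifier logits could be turned into the top-k label/probability pairs such a numeric display would show:

```python
import numpy as np

def top_k_with_uncertainty(logits, labels, k=3):
    """Convert classifier logits to predicted probabilities via softmax
    and return the top-k (label, probability) pairs in descending order,
    the kind of per-label uncertainty display described in the study."""
    z = np.asarray(logits, dtype=float)
    probs = np.exp(z - z.max())        # shift by max for numerical stability
    probs /= probs.sum()               # softmax over all labels
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]
```

For example, `top_k_with_uncertainty([2.0, 1.0, 0.1], ["oak", "maple", "fern"], k=2)` ranks "oak" first with a probability of roughly 0.66, leaving visible probability mass on the alternatives rather than presenting the top label as certain.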