

Title: Simulation of Spatial Memory for Human Navigation Based on Visual Attention in Floorplan Review
Human navigation simulation is critical to many civil engineering tasks and is of central interest to the simulation community. Most human navigation simulation approaches rely on classical psychological evidence or on assumptions that still require further validation. These overly simplified and generalized assumptions about navigation behavior overlook individual differences in spatial cognition and navigation decision-making, as well as the impact of different ways of displaying spatial information. This study proposes visual attention patterns during floorplan review as a stronger predictor of human navigation behaviors. To establish the theoretical foundation, a Virtual Reality (VR) experiment was performed to test whether visual attention patterns during spatial information review can predict the quality of spatial memory, and how that relationship is affected by the mode of information display, including 2D, 3D, and VR. The results provide a basis for the future development of prediction models.
Award ID(s):
1937878
PAR ID:
10152100
Author(s) / Creator(s):
;
Date Published:
Journal Name:
IEEE Winter Simulation Conference 2019
Page Range / eLocation ID:
3031 to 3040
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  2. In this paper, we present results from an exploratory study to investigate users’ behaviors and preferences for three different styles of search results presentation in a virtual reality (VR) head-mounted display (HMD). Prior work on 2D displays has suggested possible benefits of presenting information in ways that exploit users’ spatial cognition abilities. We designed a VR system that displays search results in three different spatial arrangements: a list of 8 results, a 4x5 grid, and a 2x10 arc. These spatial display conditions were designed to differ in terms of the number of results displayed per page (8 vs 20) and the amount of head movement required to scan the results (list < grid < arc). Thirty-six participants completed 6 search trials in each display condition (18 total). For each trial, the participant was presented with a display of search results and asked to find a given target result or to indicate that the target was not present. We collected data about users’ behaviors with, and perceptions of, the three display conditions using interaction data, questionnaires, and interviews. We explore the effects of display condition and target presence on behavioral measures (e.g., completion time, head movement, paging events, accuracy) and on users’ perceptions (e.g., workload, ease of use, comfort, confidence, difficulty, and lostness). Our results suggest that there was no difference in accuracy among the display conditions, but that users completed tasks more quickly using the arc. However, users also expressed lower preferences for the arc, instead preferring the list and grid displays. Our findings extend prior research on visual search into the area of 3-dimensional result displays for interactive information retrieval in VR HMD environments.
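The three spatial arrangements described above can be made concrete with a small geometry sketch. This is an illustrative reconstruction, not the study's code: the viewing distance, panel spacing, and the angular step of the arc are assumed values chosen only to show how such layouts could be generated.

```python
import math

def layout_positions(style, distance=2.0, spacing=0.4):
    """Return (x, y, z) positions for search-result panels in an HMD scene,
    for three hypothetical arrangements: an 8-item vertical list, a 4x5
    grid, and a 2x10 arc curved around the viewer. All dimensions are
    assumed values, not the study's actual layout parameters."""
    if style == "list":
        # 8 results stacked vertically, centered in front of the viewer
        return [(0.0, (3.5 - i) * spacing, -distance) for i in range(8)]
    if style == "grid":
        # 4 rows x 5 columns of flat panels (20 results per page)
        return [(-0.8 + c * spacing, (1.5 - r) * spacing, -distance)
                for r in range(4) for c in range(5)]
    if style == "arc":
        # 2 rows x 10 columns placed on a circle around the viewer,
        # so scanning requires more head rotation than the grid
        step = math.radians(15)  # assumed angular spacing between columns
        return [(distance * math.sin((c - 4.5) * step),
                 (0.5 - r) * spacing,
                 -distance * math.cos((c - 4.5) * step))
                for r in range(2) for c in range(10)]
    raise ValueError(f"unknown style: {style}")
```

With these assumed parameters, the arc spans roughly ±67° of horizontal rotation while the grid stays within a flat plane, which mirrors the paper's ordering of head movement demand (list < grid < arc).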
  3. Virtual reality (VR) simulations have been adopted to provide controllable environments for running augmented reality (AR) experiments in diverse scenarios. However, insufficient research has explored the impact of AR applications on users, especially their attention patterns, and whether VR simulations accurately replicate these effects. In this work, we propose to analyze user attention patterns via eye tracking during XR usage. To represent applications that provide both helpful guidance and irrelevant information, we built a Sudoku Helper app that includes visual hints and potential distractions during the puzzle-solving period. We conducted two user studies with 19 different users each in AR and VR, in which we collected eye tracking data, conducted gaze-based analysis, and trained machine learning (ML) models to predict user attentional states and attention control ability. Our results show that the AR app had a statistically significant impact on enhancing attention by increasing the fixated proportion of time, while the VR app reduced fixated time and made the users less focused. Results indicate that there is a discrepancy between VR simulations and the AR experience. Our ML models achieve 99.3% and 96.3% accuracy in predicting user attention control ability in AR and VR, respectively. A noticeable performance drop when transferring models trained on one medium to the other further highlights the gap between the AR experience and the VR simulation of it. 
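The "fixated proportion of time" measure mentioned above can be approximated from raw gaze samples with a dispersion-threshold pass. The sketch below is a generic, simplified I-DT-style filter, not the paper's analysis pipeline; the dispersion threshold and minimum window size are assumed values.

```python
def _dispersion(window):
    """Bounding-box dispersion of a set of 2D gaze points."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def fixation_proportion(gaze, max_dispersion=0.05, min_samples=6):
    """Estimate the fraction of gaze samples that belong to fixations,
    using a simplified dispersion-threshold (I-DT style) pass.
    `gaze` is a list of (x, y) points in normalized screen coordinates;
    both threshold defaults are illustrative assumptions."""
    fixated, i, n = 0, 0, len(gaze)
    while i + min_samples <= n:
        j = i + min_samples
        if _dispersion(gaze[i:j]) <= max_dispersion:
            # grow the window while the points stay tightly clustered
            while j < n and _dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1
            fixated += j - i  # count every sample in the fixation
            i = j
        else:
            i += 1  # no fixation starts here; slide the window forward
    return fixated / n if n else 0.0
```

A higher returned value corresponds to a larger fixated proportion of time, the metric the study reports as increased by the AR app and reduced by the VR app.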
  4.
    When navigating by car, developing robust mental representations (spatial knowledge) of the environment is crucial in situations where technology fails or we need to find locations not included in a navigation system’s database. In this work, we present a study that examines how screen-relative and world-relative augmented reality (AR) head-up display interfaces affect drivers’ glance behavior and spatial knowledge acquisition. Results showed that both AR interfaces had a similar impact on the level of spatial knowledge acquired. However, eye-tracking analyses revealed fundamental differences in the way participants visually interacted with the different AR interfaces, with conformal graphics demanding more visual attention from drivers.
  5.
    Objective: We controlled participants’ glance behavior while using head-down displays (HDDs) and head-up displays (HUDs) to isolate driving behavioral changes due to the use of different display types across different driving environments.
    Background: Recently, HUD technology has been incorporated into vehicles, allowing drivers to, in theory, gather display information without moving their eyes away from the road. Previous studies comparing the impact of HUDs with traditional displays on human performance show differences in both drivers’ visual attention and driving performance. Yet no studies have isolated glance behavior from driving behavior, which limits our ability to understand the cause of these differences and the resulting impact on display design.
    Method: We developed a novel method to control visual attention in a driving simulator. Twenty experienced drivers sustained visual attention to in-vehicle HDDs and HUDs while driving in both a simple, straight, empty roadway environment and a more realistic environment that included traffic and turns.
    Results: In the realistic environment, but not the simpler one, we found evidence of differing driving behaviors between display conditions, even though participants’ glance behavior was similar.
    Conclusion: The assumption that visual attention can be evaluated in the same way for different types of vehicle displays may therefore be inaccurate. Differences between driving environments call into question the validity of testing HUDs in simplistic driving environments.
    Application: As we move toward integrating HUD user interfaces into vehicles, it is important to develop new, sensitive assessment methods to ensure that HUD interfaces are indeed safe for driving.