Title: Simulation of Spatial Memory for Human Navigation Based on Visual Attention in Floorplan Review
Human navigation simulation is critical to many civil engineering tasks and is of central interest to the simulation community. Most human navigation simulation approaches rely on classic psychological evidence or on assumptions that still require further proof. Such overly simplified and generalized assumptions about navigation behavior overlook individual differences in spatial cognition and navigation decision-making, as well as the impact of different ways of displaying spatial information. This study proposes visual attention patterns during floorplan review as a stronger predictor of human navigation behavior. To establish the theoretical foundation, a Virtual Reality (VR) experiment was performed to test whether visual attention patterns during spatial information review can predict the quality of spatial memory, and how this relationship is affected by the mode of information display, including 2D, 3D, and VR. The results provide a basis for future development of prediction models.
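As a rough illustration of the kind of prediction model the abstract points toward, the sketch below relates simple visual-attention features extracted from floorplan fixations to a spatial-memory score. The feature set, the area-of-interest (AOI) labels, and the linear model are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: relating visual-attention features from floorplan review
# to a spatial-memory score. Feature names, AOI labels, and the linear model
# are illustrative assumptions, not the paper's actual method.
import numpy as np
from sklearn.linear_model import LinearRegression

def attention_features(fixations):
    """fixations: list of (aoi_label, duration_sec) pairs from one review session."""
    durations = np.array([d for _, d in fixations], dtype=float)
    total = durations.sum()
    # Dwell-time share per area of interest (AOI), e.g. corridors, exits, rooms.
    shares = {}
    for aoi, d in fixations:
        shares[aoi] = shares.get(aoi, 0.0) + d / total
    p = np.array(list(shares.values()))
    entropy = -(p * np.log2(p)).sum()          # how evenly attention was spread
    return [total, len(fixations), entropy]    # dwell time, fixation count, spread

# X: one feature row per participant; y: a spatial-memory score (e.g., recall accuracy).
sessions = [
    [("exit", 1.2), ("corridor", 0.8), ("room_A", 2.1)],
    [("exit", 0.4), ("corridor", 2.5), ("room_A", 0.6)],
    [("exit", 2.0), ("corridor", 1.0), ("room_B", 1.5)],
]
memory_scores = [0.9, 0.5, 0.8]

X = np.array([attention_features(s) for s in sessions])
model = LinearRegression().fit(X, memory_scores)
print("coefficients:", model.coef_, "R^2:", model.score(X, memory_scores))
```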
Award ID(s):
1937878
NSF-PAR ID:
10152100
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Winter Simulation Conference 2019
Page Range / eLocation ID:
3031 to 3040
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we present results from an exploratory study to investigate users’ behaviors and preferences for three different styles of search results presentation in a virtual reality (VR) head-mounted display (HMD). Prior work on 2D displays has suggested possible benefits of presenting information in ways that exploit users’ spatial cognition abilities. We designed a VR system that displays search results in three different spatial arrangements: a list of 8 results, a 4x5 grid, and a 2x10 arc. These spatial display conditions were designed to differ in terms of the number of results displayed per page (8 vs 20) and the amount of head movement required to scan the results (list < grid < arc). Thirty-six participants completed 6 search trials in each display condition (18 total). For each trial, the participant was presented with a display of search results and asked to find a given target result or to indicate that the target was not present. We collected data about users’ behaviors with, and perceptions of, the three display conditions using interaction data, questionnaires, and interviews. We explore the effects of display condition and target presence on behavioral measures (e.g., completion time, head movement, paging events, accuracy) and on users’ perceptions (e.g., workload, ease of use, comfort, confidence, difficulty, and lostness). Our results suggest that there was no difference in accuracy among the display conditions, but that users completed tasks more quickly using the arc. However, users also expressed lower preferences for the arc, instead preferring the list and grid displays. Our findings extend prior research on visual search into the area of 3-dimensional result displays for interactive information retrieval in VR HMD environments.
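For a concrete picture of the three arrangements described above, the sketch below computes head-relative 3D positions for a list of 8, a 4x5 grid, and a 2x10 arc. Panel spacing, viewing distance, arc radius, and angular span are assumed values; the study's actual dimensions are not reported here.

```python
# Minimal geometry sketch for the three result layouts (list, grid, arc).
# All spacing, distance, and span values are assumed for illustration.
import numpy as np

def list_layout(n=8, spacing=0.25, distance=2.0):
    """Vertical list centered in front of the viewer (x right, y up, z forward)."""
    ys = (np.arange(n) - (n - 1) / 2.0) * spacing
    return [(0.0, y, distance) for y in ys]

def grid_layout(rows=4, cols=5, spacing=0.3, distance=2.0):
    """rows x cols grid on a fronto-parallel plane."""
    pts = []
    for r in range(rows):
        for c in range(cols):
            x = (c - (cols - 1) / 2.0) * spacing
            y = ((rows - 1) / 2.0 - r) * spacing
            pts.append((x, y, distance))
    return pts

def arc_layout(rows=2, cols=10, radius=2.0, span_deg=120.0, row_spacing=0.35):
    """2 x 10 arc wrapped around the viewer, requiring more head rotation to scan."""
    pts = []
    angles = np.radians(np.linspace(-span_deg / 2, span_deg / 2, cols))
    for r in range(rows):
        y = ((rows - 1) / 2.0 - r) * row_spacing
        for a in angles:
            pts.append((radius * np.sin(a), y, radius * np.cos(a)))
    return pts

print(len(list_layout()), len(grid_layout()), len(arc_layout()))  # 8 20 20
```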
  2. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used in other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver’s forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be better understood before AR can be widely and safely incorporated into vehicles. Specifically, one impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring a high level of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge acquisition are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: it enables individuals to navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation.
Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) in order to assess the plausibility of, and justification for, investment in larger field-of-view (FOV) AR HUDs with potentially multiple focal planes.
Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign or a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time.
Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher perceived workload. However, overall results suggest no significant difference in perceived workload between the interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes was statistically similar across conditions, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction influenced landmark knowledge test performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants was similar under both conditions. Finally, participants performed worse on the route knowledge task than on the landmark knowledge task, independent of HUD interface design.
Conclusions: This study described a driving simulator experiment that evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective; this study instead suggests that simple, screen-fixed designs may be effective in certain contexts.
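The landmark and route knowledge measures described above could be scored in several ways; the sketch below shows one plausible scheme (recognition hit rate and pairwise ordering agreement). The scene names and scoring rules are illustrative assumptions, not necessarily those used in the study.

```python
# Hedged sketch of how the two spatial-knowledge measures might be scored:
# a hit rate for the iconic recognition task and a Kendall-style pairwise
# agreement for the scene ordering task. Both rules are assumptions.
from itertools import combinations

def landmark_score(responses, on_route):
    """responses: {scene_id: answered_yes}; on_route: set of scenes actually on the route."""
    hits = sum(1 for s in on_route if responses.get(s, False))
    return hits / len(on_route)

def route_score(participant_order, true_order):
    """Fraction of scene pairs placed in the correct relative order."""
    rank = {scene: i for i, scene in enumerate(participant_order)}
    pairs = list(combinations(true_order, 2))
    correct = sum(1 for a, b in pairs if rank[a] < rank[b])
    return correct / len(pairs)

responses = {"gas_station": True, "church": False, "bridge": True, "park": True}
print(landmark_score(responses, {"gas_station", "church", "bridge", "park"}))  # 0.75
print(route_score(["bridge", "gas_station", "church", "park"],
                  ["gas_station", "bridge", "church", "park"]))                # ~0.83
```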
  3. Gonzalez, D. (Ed.)

    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulation, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed in which real robots and humans in the laboratory can experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, constructing corresponding human/robot avatars. These avatars not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agents, all in real time. New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.

     
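A core step in a testbed like the one described above is mapping poses tracked in the laboratory frame into the virtual-world frame so that avatars can mirror real agents. The sketch below shows a generic rigid-body transform for that step; the calibration values and coordinate conventions are assumed and do not reflect the RealTHASC implementation or any specific mocap SDK.

```python
# Illustrative sketch (not the RealTHASC implementation): mapping a point tracked
# in the laboratory frame into a virtual-world frame for an avatar to mirror.
# The calibration transform, axis convention (z up), and units are assumed.
import numpy as np

def make_transform(yaw_deg, translation):
    """Rigid lab->virtual transform: rotation about the vertical (z) axis plus an offset."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T

def to_virtual(T_lab_to_virtual, p_lab):
    """Apply the homogeneous transform to a tracked 3D point (e.g., a mocap marker)."""
    p = np.append(np.asarray(p_lab, dtype=float), 1.0)
    return (T_lab_to_virtual @ p)[:3]

# Example: the lab origin sits 5 m from the virtual origin, rotated 90 degrees.
T = make_transform(yaw_deg=90.0, translation=[5.0, 0.0, 0.0])
print(to_virtual(T, [1.0, 0.0, 1.7]))   # tracked head position mapped into the virtual scene
```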
  4. Spatial perspective taking is an essential cognitive ability that enables people to imagine how an object or scene would appear from a perspective different from their current physical viewpoint. This process is fundamental for successful navigation, especially when people use navigational aids (e.g., maps) that present information from a different perspective. Research on spatial perspective taking has primarily been conducted using paper-and-pencil tasks or computerized figural tasks. However, in daily life, navigation takes place in three-dimensional (3D) space and involves movement of the body through space, and people must map the perspective indicated by a 2D, top-down, external representation onto their current 3D surroundings to guide their movements to goal locations. In this study, we developed an immersive viewpoint transformation task (iVTT) using ambulatory virtual reality (VR) technology. In the iVTT, people physically walked to a goal location in a virtual environment, viewed from a first-person perspective, after viewing a map of the same environment from a top-down perspective. When this task was compared with a computerized version of a popular paper-and-pencil perspective taking task (the Spatial Orientation Task, SOT), the results indicated that SOT performance is highly correlated with angle production error, but not distance error, in the iVTT. Overall angular error in the iVTT was higher than in the SOT. People used intrinsic body axes (the front/back axis or the left/right axis) similarly in the SOT and the iVTT, although there were some minor differences. These results suggest that the SOT and the iVTT capture common variance and cognitive processes, but are also subject to unique sources of error arising from different cognitive processes. The iVTT provides a new immersive VR paradigm for studying perspective taking ability in a space encompassing the human body, and it advances our understanding of perspective taking in the real world.
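The angle and distance production errors mentioned for the iVTT can be formalized as differences between the walked vector and the map-indicated goal vector. The sketch below shows one common formulation; the study's exact definitions may differ.

```python
# Hedged sketch of the two iVTT error measures: unsigned bearing difference and
# unsigned distance difference between the walked path and the indicated goal.
# The formulation is an assumption, not necessarily the study's definition.
import numpy as np

def production_errors(start, goal, endpoint):
    """Angle error: difference between the goal bearing and the walked bearing (degrees).
    Distance error: difference between goal distance and walked distance."""
    v_goal = np.asarray(goal, dtype=float) - np.asarray(start, dtype=float)
    v_walk = np.asarray(endpoint, dtype=float) - np.asarray(start, dtype=float)
    ang = np.degrees(np.arctan2(v_walk[1], v_walk[0]) - np.arctan2(v_goal[1], v_goal[0]))
    ang = abs((ang + 180.0) % 360.0 - 180.0)              # wrap to [0, 180]
    dist = abs(np.linalg.norm(v_walk) - np.linalg.norm(v_goal))
    return ang, dist

# Example: the map indicated a goal 4 m ahead; the participant walked 3.5 m, 15 degrees off.
goal = (4.0, 0.0)
walked = (3.5 * np.cos(np.radians(15)), 3.5 * np.sin(np.radians(15)))
print(production_errors((0.0, 0.0), goal, walked))        # ~ (15.0, 0.5)
```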