

Title: “I Am Here”: Investigating the Relationship Between Sense of Direction and Communication of Spatial Information
Navigation is critical for everyday tasks and is especially important in urban search and rescue (USAR) contexts. Beyond navigating successfully, individuals must also be able to communicate spatial information effectively. This study investigates how differences in spatial ability affect overall performance in a USAR task in a simulated Minecraft environment, as well as individuals' ability to communicate their location verbally. Randomly selected participants were asked to rescue as many victims as possible across three 10-minute missions. Results showed that sense of direction may not predict the ability to communicate spatial information, and that processing spatial information may be a skill distinct from communicating that information to others. We discuss the implications of these findings for teaming contexts that involve both processes.
Award ID(s):
1828010
NSF-PAR ID:
10432736
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
66
Issue:
1
ISSN:
2169-5067
Page Range / eLocation ID:
2021 to 2025
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate much of the navigation task. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigation to the automated SatNav system, freeing up cognitive and attentional resources for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With recent growth and advances in augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impact on drivers must be better understood before AR can be widely and safely incorporated into vehicles. Specifically, one impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are better able to navigate from their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987).
Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), to assess the plausibility of and justification for investment in larger-FOV AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design with twenty-four participants counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10- to 15-minute drive, participants were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completing the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but at a higher perceived workload. Overall, however, perceived workload did not differ significantly between the interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes was statistically similar across conditions, indicating that participants using either interface design remembered the same proportion of on-route scenes.
Deviance analysis showed that only maneuver direction had an influence on landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants was similar under both conditions. Finally, participants exhibited poorer performance on the route knowledge task than on the landmark knowledge task (independent of HUD interface design). Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
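A scene-ordering measure of the kind described above (the proportion of on-route scenes a participant sequences correctly) can be scored in several ways; the abstract does not specify the exact rule, so the sketch below assumes a pairwise order-agreement score over hypothetical scene labels:

```python
from itertools import combinations

def sequencing_score(true_order, response):
    """Fraction of scene pairs whose relative order in the participant's
    response matches the true route order (pairwise agreement).
    Assumes both lists contain the same scene labels exactly once."""
    pos = {scene: i for i, scene in enumerate(response)}
    pairs = list(combinations(true_order, 2))
    agree = sum(1 for a, b in pairs if pos[a] < pos[b])
    return agree / len(pairs)

# A participant who swaps two adjacent scenes out of four:
score = sequencing_score(["A", "B", "C", "D"], ["A", "C", "B", "D"])
# 5 of the 6 pairwise orderings agree, so the score is 5/6.
```

A pairwise score degrades gracefully with small transpositions, whereas an exact-position score would penalize a single swap at two positions at once; either is a defensible operationalization of "correctly sequenced."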
  2. Urban Search and Rescue (USAR) missions often require completing tasks in hazardous environments. In such situations, human-robot teams (HRTs) may be essential tools for future USAR missions. Transparency and explanation are two information-exchange processes: transparency involves real-time information exchange, whereas explanation does not. For effective HRTs, certain levels of transparency and explanation must be met, but how can these modes of team communication be operationalized? During the COVID-19 pandemic, our approach to answering this question involved an iterative design process that took our research objectives as inputs, along with pilot studies with remote participants. Our final research testbed design converted an in-person task environment into a completely remote study and task environment. Changes to the study environment included user-friendly video-conferencing tools such as Zoom, a custom-built application for research administration tasks, and improved modes of HRT communication that helped us avoid confounding our performance measures.
  3. Recent work has shown that smartphone-based augmented reality (AR) technology has the potential to be leveraged by people who are blind or visually impaired (BVI) for indoor navigation. The fact that this technology is low-cost, widely available, and portable further amplifies the opportunities for impact. However, when utilizing AR for navigation, there are many possible ways to communicate the spatial information encoded in the AR world to the user, and the choice of how this information is presented may have profound effects on its usability for navigation. In this paper we describe frameworks from the field of spatial cognition, discuss important results in spatial cognition for people who are BVI, and use these results and frameworks to lay out possible user interface paradigms for AR-based navigation technology for people who are BVI. We also present findings from a route dataset collected from an AR-based navigation application that underscore the urgency of considering spatial cognition when developing AR technology for people who are BVI.
  4. While data on human behavior in COVID-19-rich environments have been captured and publicly released, the spatial components of such data are recorded in two dimensions. Thus, the complete roles of the built and natural environment cannot be readily ascertained. This paper introduces a mechanism for the three-dimensional (3D) visualization of the egress behaviors of individuals leaving a COVID-19-exposed healthcare facility in Spring 2020 in New York City. Behavioral data were extracted and projected onto a 3D aerial laser scanning point cloud of the surrounding area rendered with Potree, a readily available open-source Web Graphics Library (WebGL) point cloud viewer. The outcomes were 3D heatmap visualizations of the built environment indicating the event locations of individuals exhibiting specific characteristics (e.g., men vs. women; public transit users vs. private vehicle users). These visualizations enabled interactive navigation through the space, accessible through any modern web browser supporting WebGL. Visualizing egress behavior in this manner may highlight patterns indicative of correlations between the environment, human behavior, and transmissible diseases. Findings from such tools have the potential to identify high-exposure areas and surfaces such as doors, railings, and other physical features. Providing flexible visualization capabilities with 3D spatial context can enable analysts to quickly advise and communicate vital information across a broad range of use cases. This paper presents such an application to extract the public health information necessary to form localized responses to reduce COVID-19 infection and transmission rates in urban areas.
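The core aggregation step behind a 3D event heatmap is binning event coordinates into spatial cells and counting occurrences per cell. The sketch below is a minimal illustration of that step under assumed inputs (the coordinates and voxel size are hypothetical; the actual study projected events onto a Potree-rendered laser-scan point cloud rather than a uniform grid):

```python
from collections import Counter

def voxel_heatmap(points, voxel_size=1.0):
    """Bin 3D event coordinates into voxel counts for a heatmap overlay.

    points: iterable of (x, y, z) event locations in a shared coordinate
    frame. Returns a dict-like Counter mapping voxel index -> event count,
    which a renderer could map to color intensity.
    """
    counts = Counter()
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        counts[key] += 1
    return counts

# Hypothetical egress events: two near one doorway, one farther away.
events = [(0.2, 0.4, 1.1), (0.7, 0.3, 1.4), (5.1, 2.2, 0.9)]
heat = voxel_heatmap(events, voxel_size=1.0)
# The first two events fall in voxel (0, 0, 1); the third in (5, 2, 0).
```

Filtering the input events by an attribute (e.g., transit users only) before binning yields the per-characteristic heatmaps the paper describes.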
  5. Urban Search and Rescue (USAR) missions continue to benefit from the incorporation of human–robot teams (HRTs). USAR environments can be ambiguous, hazardous, and unstable. The integration of robot teammates into USAR missions has enabled human teammates to access areas of uncertainty, including hazardous locations. For HRTs to be effective, it is pertinent to understand the factors that influence team effectiveness, such as shared goals, mutual understanding, and efficient communication. The purpose of our research is to determine how to (1) better establish human trust, (2) identify useful levels of robot transparency and robot explanation, (3) ensure situation awareness, and (4) encourage a bipartisan role among teammates. By implementing robot transparency and robot explanations, we found that effective HRTs depend on robot explanations that are context-driven and readily available to the human teammate.