Title: Mitigating the Costs of Spatial Transformations With a Situation Awareness Augmented Reality Display: Assistance for the Joint Terminal Attack Controller 3-17
Objective
Evaluate and model the advantage of a situation awareness (SA) display, presented via augmented reality (AR), for the ground-based joint terminal attack controller (JTAC) in judging and describing the spatial relations between objects in a hostile zone.
Background
The accurate world-referenced description of the relative locations of surface objects, when viewed from an oblique slant angle (aircraft, observation post), is hindered by (1) the compression of the visual scene, amplified at lower slant angles, and (2) the need for mental rotation when the scene is viewed from a non-northerly orientation.
Approach
Participants viewed a virtual reality (VR)-simulated four-object scene from either of two slant angles, at each of four compass orientations, either unaided or aided by an AR head-mounted display (AR-HMD) depicting the scene from a top-down (avoiding compression) and north-up (avoiding mental rotation) perspective. They described the geographical layout of the four objects within the display.
Results
Compared with the control condition, the condition supported by the north-up SA display shortened description time, particularly at non-northerly orientations (9 s, a 30% benefit), and improved description accuracy, particularly for the more compressed scene (lower slant angle), as fit by a simple computational model.
Conclusion
The SA display provides large, significant benefits to this critical phase of ground-air communications in managing an attack, as predicted by the task analysis of the JTAC.
Application
The results impact the design of AR-HMDs to support combat ground-air communications and illustrate the magnitude by which basic cognitive principles "scale up" to realistically simulated real-world tasks such as search and rescue.
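The two perceptual costs the display removes can be sketched with elementary geometry (an illustrative simplification, not the paper's computational model; the function names are hypothetical):

```python
import math

def projected_depth_compression(slant_deg: float) -> float:
    """Fraction of the along-depth separation between ground objects that
    survives projection when the ground plane is viewed at a given slant
    angle. At 90 degrees (top-down) nothing is compressed; at low slant
    angles the scene is heavily foreshortened."""
    return math.sin(math.radians(slant_deg))

def world_bearing(screen_bearing_deg: float, heading_deg: float) -> float:
    """Mentally rotate an egocentric (screen-referenced) bearing back into
    a world-referenced (north-up) bearing, given the viewer's compass
    heading. A north-up display makes this rotation unnecessary."""
    return (screen_bearing_deg + heading_deg) % 360.0
```

A top-down, north-up SA display sets the compression factor to 1.0 and the required mental rotation to zero, which is the geometric reading of the two benefits reported above.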
Award ID(s):
1948254
NSF-PAR ID:
10283603
Date Published:
Journal Name:
Human Factors: The Journal of the Human Factors and Ergonomics Society
ISSN:
0018-7208
Page Range / eLocation ID:
001872082110224
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.
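The crossability signal such overlays convey can be approximated by a simple time-gap rule; a minimal sketch (the rule, threshold values, and walking speed are assumptions for illustration, not the study's implementation):

```python
def crossing_time(street_width_m: float, walking_speed_mps: float) -> float:
    """Expected time to traverse the street at a steady walking speed."""
    return street_width_m / walking_speed_mps

def gap_is_crossable(gap_s: float, crossing_time_s: float,
                     margin_s: float = 1.5) -> bool:
    """Flag a gap between vehicles as safe when the time gap exceeds the
    pedestrian's expected crossing time plus a safety margin."""
    return gap_s >= crossing_time_s + margin_s
```

Under this rule, an overlay would mark an 8 s gap as safe for a crossing that takes 5 s, but reject a 5 s gap for the same crossing.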

  2. Dini, Petre (Ed.)
The National Academy of Engineering's "Fourteen Grand Challenges for Engineering in the Twenty-First Century" identifies challenges in science and technology that are both feasible and sustainable to help people and the planet prosper. Four of these challenges are: advance personalized learning, enhance virtual reality, make solar energy affordable, and provide access to clean water. In this work, the authors discuss the development of applications using immersive technologies, such as Virtual Reality (VR) and Augmented Reality (AR), and their significance in addressing these four challenges. The Drinking Water AR mobile application helps users easily locate drinking water sources on the Auburn University (AU) campus, thus providing easy access to clean water. The Sun Path mobile application helps users visualize the Sun's path at any given time and location. Students study the Sun's path in various fields but often have a hard time visualizing and conceptualizing it, so the application can help. Similarly, the application could assist users with efficient solar panel placement: architects often study the Sun's path to evaluate solar panel placement at a particular location, and effective placement helps optimize the efficiency of solar energy use. The Solar System Oculus Quest VR application enables users to view all eight planets and the Sun in the solar system. Planets are simulated to mimic their position, scale, and rotation relative to the Sun. Using the Oculus Quest controllers, rendered as human hands in the scene, users can teleport within the world view and get closer to each planet and the Sun for a better view of the objects and the text associated with them. As a result, tailored learning is aided and virtual reality is enhanced. In a camp held virtually due to Covid-19, K-12 students were introduced to the concept and usability of the applications. A Likert-scale metric was used to assess the efficacy of application usage. The data show that participants of this camp benefited from an immersive learning experience that allowed for simulation with the inclusion of VR and AR.
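A Sun-path visualization like the one described rests on standard solar-position geometry; a minimal sketch of the core relation (a simplified textbook formula, ignoring atmospheric refraction and the equation of time, and not necessarily the app's actual computation):

```python
import math

def solar_elevation_deg(lat_deg: float, decl_deg: float,
                        hour_angle_deg: float) -> float:
    """Solar elevation angle from observer latitude, solar declination,
    and hour angle (0 degrees at solar noon), via the standard
    spherical-trigonometry relation:
        sin(el) = sin(lat)*sin(decl) + cos(lat)*cos(decl)*cos(H)
    """
    lat, decl, h = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(math.asin(sin_el))
```

For example, at the equator on an equinox (declination 0) the Sun is directly overhead at solar noon, and at a given latitude the noon Sun sits higher in summer (positive declination) than in winter, which is also the geometry that drives solar panel placement decisions.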
3. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be better understood before AR can be widely and safely incorporated into vehicles. Specifically, one impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating from their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: it allows individuals to navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987).
Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), to assess the plausibility of and justification for investment in generating larger field-of-view (FOV) AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design with twenty-four participants, counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, albeit at a higher perceived workload; however, overall workload results suggest no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes in both conditions was statistically similar, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction influenced landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants was similar under both conditions. Finally, participants performed worse on the route knowledge task than on the landmark knowledge task (independent of HUD interface design). Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective; this study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
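A scene-ordering task like the one above can be scored in several ways; one common choice (an illustrative metric, not necessarily the scoring the study used) is the proportion of scene pairs placed in the correct relative order:

```python
def sequencing_accuracy(true_order: list, answer: list) -> float:
    """Fraction of all scene pairs whose relative order in the
    participant's answer matches the true on-route order. 1.0 means a
    perfect ordering; 0.0 means a fully reversed one."""
    rank = {scene: i for i, scene in enumerate(answer)}
    pairs = [(a, b)
             for i, a in enumerate(true_order)
             for b in true_order[i + 1:]]
    correct = sum(1 for a, b in pairs if rank[a] < rank[b])
    return correct / len(pairs)
```

This pairwise score degrades gracefully for partial errors (a single swapped pair costs only one of the n*(n-1)/2 comparisons), which makes it suitable for comparing route knowledge across conditions.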
4. Puig Puig, Anna (Ed.)
Motivated by the potential of holographic augmented reality (AR) to offer an immersive 3D appreciation of morphology and anatomy, the purpose of this work is to develop and assess an interface for image-based planning of prostate interventions with a head-mounted display (HMD). The computational system is a data and command pipeline linking a magnetic resonance imaging (MRI) scanner/data and the operator, with modules dedicated to image processing and segmentation, structure rendering, trajectory planning, and spatial co-registration. The interface was developed with the Unity3D engine (C#) and deployed and tested on a HoloLens HMD. For ergonomics in the surgical suite, the system was endowed with hands-free interactive manipulation of images and the holographic scene via hand gestures and voice commands. The system was tested in silico using MRI and ultrasound datasets of prostate phantoms. The holographic AR scene rendered by the HoloLens HMD was subjectively found superior to desktop-based volume or 3D rendering with regard to structure detection, appreciation of spatial relationships, planning of access paths, and manual co-registration of MRI and ultrasound. By inspecting the virtual trajectory superimposed on rendered structures and MR images, the operator observes collisions of the needle path with vital structures (e.g., the urethra) and adjusts accordingly. Holographic AR interfacing with a wireless HMD endowed with hands-free gesture and voice control is a promising technology; studies need to systematically assess the clinical merit of such systems and the functionalities needed.
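The collision inspection described, a straight needle trajectory checked against vital structures, reduces geometrically to a segment-to-point distance test if each structure is approximated by a bounding sphere (an illustrative sketch, not the system's actual code; all names are hypothetical):

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment a-b in 3D."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)  # squared segment length
    # Parameter of the closest point, clamped to the segment ends.
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def needle_collides(entry, target, structure_center, structure_radius):
    """True when the straight needle path from entry to target passes
    within the bounding radius of a structure (e.g., the urethra)."""
    return point_segment_distance(structure_center, entry, target) < structure_radius
```

Running this test for every segmented structure each time the operator adjusts the virtual trajectory is cheap enough for an interactive frame rate on an HMD.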
5. This is one of the first accounts of the security analysis of consumer immersive Virtual Reality (VR) systems. This work breaks new ground, coins new terms, and constructs proof-of-concept implementations of attacks related to immersive VR. Our work used the two most widely adopted immersive VR systems, the HTC Vive and the Oculus Rift. More specifically, we were able to create attacks that can potentially disorient users, turn their Head-Mounted Display (HMD) camera on without their knowledge, overlay images in their field of vision, and modify VR environmental factors that force them into hitting physical objects and walls. Finally, we illustrate through a human-participant deception study the success of exploiting VR systems to control immersed users and move them to a location in physical space without their knowledge. We term this the Human Joystick Attack. We conclude our work with future research directions and ways to enhance the security of these systems.
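The environment manipulation behind a Human Joystick-style attack resembles the well-known redirected-walking technique: small per-frame offsets of the virtual scene cause the immersed user to compensate physically and drift toward a chosen point. A minimal sketch of that idea in 2D (illustrative only; the offset size, coordinate frame, and frame loop are assumptions, not the paper's implementation):

```python
def drift_offsets(user_pos, target_pos, step=0.002):
    """Yield small per-frame virtual-scene offsets (in metres) that, when
    unconsciously compensated by the user, walk them from user_pos toward
    target_pos in physical space."""
    x, y = user_pos
    tx, ty = target_pos
    while (x, y) != (tx, ty):
        dx, dy = tx - x, ty - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= step:
            yield (tx - x, ty - y)   # final partial step closes the gap
            x, y = tx, ty
        else:
            ox, oy = step * dx / dist, step * dy / dist
            yield (ox, oy)
            x, y = x + ox, y + oy
```

At typical HMD refresh rates a 2 mm per-frame offset is far below perceptual detection thresholds, which is what makes the drift covert.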