Title: The effects of presentation method and simulation fidelity on psychomotor education in a bimanual metrology training simulation
In this study, we empirically evaluated the effects of presentation method and simulation fidelity on task performance and psychomotor skill acquisition in an immersive bimanual simulation for precision metrology education. In a 2 × 2 experiment design, we compared a large-screen immersive display (LSID) with a head-mounted display (HMD), and the presence versus absence of gravity. Advantages of the HMD include more natural interaction with the simulation than the large-screen immersive display affords, because the interactions in the virtual task more closely resemble those of the real-world task. Suspending the laws of physics may affect usability and, in turn, learning outcomes. Our dependent variables consisted of pre- and post-test cognition questionnaires, quantitative performance measures, perceived workload and system usefulness, and a psychomotor assessment measuring the extent to which learning transferred from the virtual to the real world. Results indicate that the HMD condition was preferable to the large-screen immersive display on several metrics, while the no-gravity condition led users to adopt strategies that were not advantageous for task performance.
Award ID(s):
1104181
NSF-PAR ID:
10219553
Author(s) / Creator(s):
Date Published:
Journal Name:
2017 IEEE Symposium on 3D User Interfaces (3DUI)
Page Range / eLocation ID:
59 to 68
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Objective: Evaluate and model the advantage of situation awareness (SA) support from an augmented reality (AR) display for the ground-based joint terminal attack controller (JTAC) in judging and describing the spatial relations between objects in a hostile zone. Background: Accurate world-referenced description of the relative locations of surface objects, when viewed from an oblique slant angle (aircraft, observation post), is hindered by (1) compression of the visual scene, amplified at a lower slant angle, and (2) the need for mental rotation when the scene is viewed from a non-northerly orientation. Approach: Participants viewed a virtual reality (VR)-simulated four-object scene from either of two slant angles, at each of four compass orientations, either unaided or aided by an AR head-mounted display (AR-HMD) depicting the scene from a top-down (avoiding compression) and north-up (avoiding mental rotation) perspective. They described the geographical layout of the four objects within the display. Results: Compared with the control condition, the condition supported by the north-up SA display shortened description time, particularly on non-northerly orientations (9 s, a 30% benefit), and improved description accuracy, particularly for the more compressed scene (lower slant angle), as fit by a simple computational model. Conclusion: The SA display provides large, significant benefits to this critical phase of ground-air communications in managing an attack, as predicted by the task analysis of the JTAC. Application: Results inform the design of AR-HMDs to support combat ground-air communications and illustrate the magnitude by which basic cognitive principles "scale up" to realistically simulated real-world tasks such as search and rescue.
  2. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources for other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With recent growth and advances in augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating from their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function, allowing individuals to navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), to assess the plausibility of and justification for investment in generating larger field-of-view (FOV) AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, albeit at a higher perceived workload on that subscale. Overall, however, results suggest no significant difference in perceived workload between interface design conditions.
Landmark knowledge results suggest that the mean number of remembered scenes in both conditions was statistically similar, indicating that participants using either interface design remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction influenced landmark knowledge test performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants was similar under both conditions. Finally, participants exhibited poorer performance in the route knowledge task than in the landmark knowledge task, independent of HUD interface design. Conclusions: This driving simulator study evaluated the head-up presentation of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective; this study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
  3. Abstract

    Augmented reality (AR) enhances the user's perception of the real environment by superimposing computer-generated virtual images. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, the invisible virtual environment must be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace, which can affect the accuracy of the robot's operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce the misalignment in general video see-through AR systems. Accurately superimposing virtual information onto the real environment requires identifying the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the head-mounted display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
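
    As a notational sketch (the symbols below are ours, not necessarily the paper's, and assume homogeneous coordinates with a pinhole-style camera projection), the world-to-screen mapping outlined above can be viewed as a chain of transforms in which the HMD-to-camera offset is the unknown estimated by the offline calibration: $\mathbf{p}_{\mathrm{screen}} \sim \mathbf{K}\,\mathbf{T}_{\mathrm{cam}\leftarrow\mathrm{hmd}}\,\mathbf{T}_{\mathrm{hmd}\leftarrow\mathrm{world}}\,\mathbf{p}_{\mathrm{world}}$, where $\mathbf{K}$ is the camera projection matrix, $\mathbf{T}_{\mathrm{hmd}\leftarrow\mathrm{world}}$ is the tracked HMD pose, and $\mathbf{T}_{\mathrm{cam}\leftarrow\mathrm{hmd}}$ is the offset matrix determined offline; an error in any single factor propagates directly into on-screen misalignment.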

     
  4. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.

     
    This paper investigates a concept called Virtual Ability Simulation (VAS) for people with disabilities due to multiple sclerosis (MS) in a virtual reality (VR) environment. In a VAS, people with a disability perform tasks that are made easier in the virtual environment (VE) than in the real world. We hypothesized that placing people with disabilities in a VAS would increase confidence and enable more efficient task completion. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual task called "kick the ball" in two different conditions: a no-gain condition (i.e., the same difficulty as in the real world) and a rotational gain condition (i.e., physically easier than the real world but visually the same). The results from our study suggest that the VAS increased participants' confidence, which in turn led them to perceive the same task as easier.
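
    To illustrate what a rotational gain manipulation like the one above typically involves (an illustrative sketch only; the function and parameter names below are ours, not taken from the paper), the user's physical rotation is scaled before being applied to the virtual viewpoint, so a gain above 1 lets the same virtual turn be completed with less physical movement:

        # Illustrative sketch (hypothetical names, not from the paper):
        # virtual rotation = gain * physical rotation.
        def apply_rotational_gain(physical_yaw_delta_deg: float, gain: float = 1.5) -> float:
            """Return the virtual yaw change produced by a physical yaw change.

            gain == 1.0 reproduces the no-gain condition (same as the real world);
            gain > 1.0 means less physical turning is needed for the same virtual turn.
            """
            return gain * physical_yaw_delta_deg

        # Example: a 20-degree physical turn produces a 30-degree virtual turn at gain 1.5.
        print(apply_rotational_gain(20.0, gain=1.5))  # 30.0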