
Title: What Can We Learn from Augmented Reality (AR)?
Emerging technologies such as Augmented Reality (AR) have the potential to radically transform education by making challenging concepts visible and accessible to novices. In this project, we designed a HoloLens-based system in which collaborators engage in an unstructured learning activity about the invisible physics of audio speakers. They learned topics ranging from spatial knowledge, such as the shape of magnetic fields, to abstract conceptual knowledge, such as the relationships between electricity and magnetism. We compared participants' learning, attitudes, and collaboration across multiple experimental conditions that paired a tangible interface with varying layers of AR information. We found that educational AR representations were beneficial for learning specific knowledge and for increasing participants' self-efficacy (i.e., their confidence in their ability to learn physics concepts). However, we also found that participants in conditions without AR educational content learned some concepts better than other groups and became more curious about physics. We discuss differences in learning and collaboration, as well as the benefits and drawbacks of using augmented reality in unstructured learning activities.
Authors:
Award ID(s):
1748093
Publication Date:
NSF-PAR ID:
10101502
Journal Name:
What Can We Learn from Augmented Reality (AR)?
Page Range or eLocation-ID:
1 to 12
Sponsoring Org:
National Science Foundation
More Like this
  1. Augmented reality (AR) is a powerful visualization tool to support learning of scientific concepts across learners of various ages. AR can make otherwise invisible information visible in the physical world in real time. In this study, we examine a subset of data from a larger study (N=120) in which participant pairs interacted with an augmented sound-producing speaker. We explored the learning behaviors of eight pairs of learners (N=16) who participated in an unstructured physics activity under two conditions: with or without AR. Comparing behaviors between the two experimental conditions, we found that AR affected learning in four different ways: participants in the AR condition (1) learned more about visual concepts (e.g., magnetic field structures) but learned less about nonvisual content (e.g., the relationship between electricity and physical movement); (2) stopped exploring the system sooner than non-AR participants; (3) used fewer aids in exploration and teaching; and (4) spent less time teaching their collaborators. We discuss the implications of these results for designing collaborative learning activities with augmented reality.
  2. Augmented reality (AR) applications are growing in popularity in educational settings. While the effects of AR experiences on learning have been widely studied, there is relatively less research on the impact of AR on the dynamics of co-located collaborative learning, specifically in the context of novices programming robots. Educational robotics is a powerful learning context because it engages students with problem solving, critical thinking, STEM (Science, Technology, Engineering, Mathematics) concepts, and collaboration skills. However, such collaborations can suffer when students have unequal access to resources or dominant peers. In this research we investigate how augmented reality impacts learning and collaboration while peers engage in robot programming activities. We use a mixed-methods approach to measure how participants learn, manipulate resources, and engage in problem-solving activities with peers. We investigate how these behaviors are affected by the presence of augmented reality visualizations and by participants' proximity to resources. We find that augmented reality improved overall group learning and collaboration. Detailed analysis shows that AR helps one participant substantially more than the other, by improving their ability to learn and contribute while remaining engaged with the robot. Furthermore, augmented reality helps both participants maintain common ground and balance contributions during problem-solving activities. We discuss the implications of these results for designing AR and non-AR collaborative interfaces.
  3. Augmented reality (AR) can be a useful educational tool that allows the representation of concepts that are otherwise invisible and difficult to visualize. We designed an augmented reality tool (the Holoboard) for learning about circuits and voltage, and deployed it in a summer school course for students to use. The students were hesitant to use the tool for several reasons, but those who did had a positive experience and found the tool helpful. Overall, the tools were used by students who had an independent approach to problem solving, and students preferred tools that were easily accessible and did not disrupt their workflow. We conclude with suggestions for improving the Holoboard to tailor it to the needs of students.
  4. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used in other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances in augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals can navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) to assess the plausibility of, and justification for, investment in generating larger field-of-view (FOV) AR HUDs with potentially multiple focal planes. Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completing the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better in the world-relative condition, but with a higher perceived workload. However, results suggest there is no significant difference in perceived workload between the interface design conditions.
Landmark knowledge results suggest that the mean number of remembered scenes in both conditions is statistically similar, indicating that participants using either interface design remembered the same proportion of on-route scenes. Deviance analysis showed that only maneuver direction had an influence on landmark knowledge test performance (see the analysis sketch after this list). Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants performed worse on the route knowledge task than on the landmark knowledge task (independent of HUD interface design). Conclusions: This work described a driving simulator study that evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
  5. Augmented reality (AR) has the potential to fundamentally transform science education by making the learning of abstract science ideas tangible and engaging. However, little is known about how students interact with AR technologies and how these interactions may affect learning performance in science laboratories. This study examined high school students' navigation patterns and science learning with a mobile AR technology, developed by the research team, in laboratory settings. The AR technology allows students to conduct hands-on laboratory experiments and interactively explore various science phenomena covering biology, chemistry, and physics concepts. In this study, seventy ninth-grade students carried out science laboratory experiments in pairs to learn thermodynamics. Our cluster analysis identified two groups of students, which differed significantly in navigation length and breadth (see the clustering sketch after this list). The two groups demonstrated unique navigation patterns that revealed students' various ways of observing, describing, exploring, and evaluating science phenomena. These navigation patterns were associated with learning performance as measured by scores on lab reports. The results suggest the need to provide access to multiple representations, and different types of interactions with those representations, to support effective science learning, as well as to design representations and the connections between them in ways that cultivate scientific reasoning skills and a nuanced understanding of scientific processes.
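The deviance analysis mentioned in item 4 typically refers to fitting a generalized linear model and examining which predictors improve the model's deviance. The sketch below is only a minimal illustration of that style of analysis; the data are synthetic and the column names (correct, maneuver, interface) are hypothetical, so it does not reproduce the study's actual dataset or model specification.

```python
# Illustrative sketch only: a binomial GLM ("deviance analysis") relating
# landmark-recognition accuracy to maneuver direction and HUD interface.
# Data and column names are synthetic/hypothetical, not the study's own.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

trials = pd.DataFrame({
    "correct":   [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "maneuver":  ["left", "right", "left", "right", "left", "right"] * 2,
    "interface": ["world"] * 6 + ["screen"] * 6,
})

# Binomial GLM: was each on-route scene correctly recognized?
model = smf.glm(
    "correct ~ C(maneuver) + C(interface)",
    data=trials,
    family=sm.families.Binomial(),
).fit()

print(model.summary())                       # coefficient table
print("Residual deviance:", model.deviance)  # model fit assessed via deviance
```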
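Item 5 groups learners by their navigation length and breadth. The following is a minimal sketch of such a clustering step, assuming synthetic feature values and k-means with k=2; the study's actual clustering procedure and features may differ.

```python
# Illustrative sketch only: clustering learners by navigation length and
# breadth. Feature values are synthetic; k-means with k=2 is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-student features: [navigation_length, navigation_breadth]
features = np.array([
    [42, 7], [55, 9], [18, 3], [23, 4],
    [60, 10], [15, 2], [48, 8], [20, 3],
], dtype=float)

X = StandardScaler().fit_transform(features)  # put both features on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster membership for each (hypothetical) student
```

Standardizing the features first matters here because navigation length and breadth are on different scales, and unscaled k-means would be dominated by the larger-valued feature.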