

This content will become publicly available on July 14, 2024

Title: Embodied mixed reality with passive haptics in STEM education: randomized control study with chemistry titration

Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware chemists use in real laboratories is tethered to a laptop with a digitized lesson. Because interactive educational content is increasingly being placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socioeconomic status (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized controlled trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, both for titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, centering mainly on the key hand-turning gesture.
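The abstract does not spell out the lesson's chemistry, but the stoichiometry behind a simple 1:1 acid-base titration (the kind a burette-controlled lesson typically teaches) can be sketched as follows; the function name and the example concentrations are illustrative assumptions, not taken from the study.

```python
def titrant_volume_to_equivalence(c_analyte, v_analyte, c_titrant):
    """Volume of titrant needed to reach the equivalence point of a
    1:1 monoprotic acid-base titration, from C1 * V1 = C2 * V2.
    Concentrations in mol/L; volumes in any consistent unit."""
    return c_analyte * v_analyte / c_titrant

# Hypothetical example: 25.0 mL of 0.10 M HCl titrated with 0.125 M NaOH
v_eq = titrant_volume_to_equivalence(0.10, 25.0, 0.125)
print(round(v_eq, 1))  # 20.0 (mL of NaOH delivered from the burette)
```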
Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low prior knowledge students, it may be advantageous for them to begin learning content on a more ubiquitous interface (e.g., keyboard) before moving them to more novel, multi-modal MR devices/interfaces.
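An Aptitude by Treatment Interaction of the kind reported above is usually tested by regressing posttest score on prior knowledge, treatment, and their product; a nonzero product coefficient means the treatment effect depends on aptitude. The sketch below (synthetic data and coefficient values are invented for illustration, not the study's results) fits such a model with ordinary least squares.

```python
import numpy as np

def ati_interaction_fit(prior, treated, post):
    """OLS fit of post ~ prior + treated + prior:treated.
    A nonzero interaction coefficient (last entry) is the
    signature of an Aptitude by Treatment Interaction (ATI)."""
    X = np.column_stack([np.ones_like(prior), prior, treated, prior * treated])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return beta  # [intercept, prior, treatment, interaction]

# Hypothetical, noise-free data where treatment helps high-prior students more
rng = np.random.default_rng(0)
prior = rng.uniform(0, 10, 200)
treated = np.repeat([0.0, 1.0], 100)
post = 2.0 + 0.5 * prior + 1.0 * treated + 0.3 * prior * treated
beta = ati_interaction_fit(prior, treated, post)
print(np.round(beta, 2))  # recovers [2.0, 0.5, 1.0, 0.3]
```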

 
Award ID(s):
1917912
NSF-PAR ID:
10476637
Author(s) / Creator(s):
Publisher / Repository:
Frontiers in Virtual Reality
Date Published:
Journal Name:
Frontiers in Virtual Reality
Volume:
4
ISSN:
2673-4192
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

While there is increased interest in using movement and embodiment to support learning due to the rise in theories of embodied cognition and learning, additional work needs to be done to explore how we can make sense of students collectively developing their understanding within a mixed-reality environment. In this paper, we explore embodied communication’s individual and collective functions as a way of seeing students’ learning through embodiment. We analyze data from a mixed-reality (MR) environment: Science through Technology Enhanced Play (STEP) (Danish et al., International Journal of Computer-Supported Collaborative Learning 15:49–87, 2020), using descriptive statistics and interaction analysis to explore the role of gesture and movement in student classroom activities and their pre- and post-interviews. The results reveal that students appear to develop gestures for representing challenging concepts within the classroom and then use these gestures to help clarify their understanding within the interview context. We further explore how students collectively develop these gestures in the classroom, with a focus on their communicative acts, and then provide a list of individual and collective functions that are supported by student gestures and embodiment within the STEP MR environment, and discuss the functions of each act. Finally, we illustrate the value of attending to these gestures for educators and designers interested in supporting embodied learning.

     
  2. Immersive Virtual Environments (IVEs) incorporating tangibles are becoming more accessible. The success of applications combining 3D printed tangibles and VR often depends on how accurately size is perceived. Research has shown that visuo-haptic perceptual information is important in the perception of size. However, it is unclear how these sensory-perceptual channels are affected by immersive virtual environments that incorporate tangible objects. Towards understanding the effects of different sensory information channels on near-field size perception of tangibles of graspable sizes in IVEs, we conducted a between-subjects study evaluating the accuracy of size perception across three experimental conditions (Vision-only, Haptics-only, Vision and Haptics). We found that, overall, participants consistently overestimated the size of the dials regardless of the type of perceptual information that was presented. Participants in the Haptics-only condition overestimated diameters to a larger degree than in the other conditions. Participants were most accurate in the Vision-only condition and least accurate in the Haptics-only condition. Our results also revealed that increased efficiency in reporting size over time was most pronounced in the visuo-haptic condition.
  3. To better prepare future generations, knowledge of computers and programming is among the many skills taught in almost all Science, Technology, Engineering, and Mathematics (STEM) programs; however, teaching and learning programming is a complex task that is generally considered difficult by students and teachers alike. One approach to engage and inspire students from a variety of backgrounds is the use of educational robots. Unfortunately, previous research presents mixed results on the effectiveness of educational robots on student learning. One possibility for this lack of clarity may be that students have a wide variety of learning styles. It is possible that the use of kinesthetic feedback, in addition to the normally used visual feedback, may improve learning with educational robots by providing a richer, multi-modal experience that may appeal to a larger number of students with different learning styles. It is also possible, however, that the addition of kinesthetic feedback, and the way it may interfere with the visual feedback, may decrease a student's ability to interpret the program commands being executed by a robot, which is critical for program debugging. In this work, we investigated whether human participants were able to accurately determine a sequence of program commands performed by a robot when kinesthetic and visual feedback were used together. Command recall and end-point location determination were compared to the typically used visual-only method, as well as to a narrative description. Results from 10 sighted participants indicated that individuals were able to accurately determine a sequence of movement commands and their magnitude when using combined kinesthetic + visual feedback. Participants' recall accuracy of program commands was actually better with kinesthetic + visual feedback than with visual feedback alone.
Although recall accuracy was better still with the narrative description, this was primarily because participants in the kinesthetic + visual condition confused an absolute rotation command with a relative rotation command. Participants' zone-location accuracy of the end point after a command was executed was significantly better for both the kinesthetic + visual feedback and narrative methods compared to the visual-only method. Together, these results suggest that the use of kinesthetic + visual feedback improves an individual's ability to interpret program commands, rather than decreases it.
  4. Haptic technology has the potential to expand and transform the ways that students can experience a variety of science, technology, engineering, and math (STEM) topics. Designing kinesthetic haptic devices for educational applications is challenging because of the competing objectives of using low-cost components, making the device robust enough to be handled by students, and the desire to render high fidelity haptic virtual environments. In this paper, we present the evolution of a device called "Hapkit": a low cost, one-degree-of-freedom haptic kit that can be assembled by students. From 2013-2015, different versions of Hapkit were used in courses as a tool to teach haptics, physics, and control. These include a Massive Open Online Course (MOOC), two undergraduate courses, a graduate course, and a middle school class. Based on our experience using Hapkit in these educational environments, we evolved the design in terms of its structural materials, drive mechanism, and mechatronic components. Our latest design, Hapkit 3.0, includes several features that allow students to manufacture and assemble a robust and high-fidelity haptic device. First, it uses 3-D printed plastic structural material, which allows the design to be built and customized using readily available tools. Second, the design takes into account the limitations of 3-D printing, such as warping during printing and poor tolerances. This is achieved at a materials cost of approximately US $50, which makes it feasible for distribution in classroom and online education settings. The open source design is available at http://hapkit.stanford.edu. 
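The abstract above does not give Hapkit's rendering code, but the canonical demonstration for a one-degree-of-freedom kinesthetic device of this kind is an impedance-rendered virtual spring wall: zero force in free space, proportional push-back once the handle penetrates the wall. The sketch below is a hedged illustration of that idea; the function name, stiffness, and positions are assumptions, not from the Hapkit firmware.

```python
def virtual_wall_force(x, wall_pos=0.0, k=200.0):
    """Impedance-style rendering of a one-sided virtual wall.
    x and wall_pos are handle/wall positions in metres; k is the
    virtual stiffness in N/m. Returns the commanded force in newtons:
    zero in free space, a spring force opposing penetration otherwise."""
    penetration = x - wall_pos
    if penetration <= 0.0:
        return 0.0          # free space: no force
    return -k * penetration  # inside the wall: push the handle back out

print(virtual_wall_force(-0.010))  # 0.0  (1 cm outside the wall)
print(virtual_wall_force(0.005))   # -1.0 (5 mm of penetration at 200 N/m)
```

In a real device this function would run inside a fast control loop (typically around 1 kHz) so the rendered wall feels stiff rather than spongy.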
  5. Over the last decade, extensive growth in digital educational content has opened up new opportunities for teaching and learning. Despite such advancements, digital learning experiences often omit one of our richest and earliest learning modalities: touch. This lack of haptic (touch) interaction creates a growing gap in supporting inclusive, embodied learning experiences digitally. Our research centers on the development of inclusive learning tools that can flexibly adapt for use in different learning contexts to support learners with a wide range of needs, co-designed with students with disabilities. In this paper, we focus on the development of a tangible device for geometry learning: the Tangible Manipulative for Quadrilaterals (TMQ). We detail the design evolution of the TMQ and present two user studies investigating the affordances of the TMQ and the user strategies employed when it is explored in isolation and in tandem with a two-dimensional touchscreen-based rendering of a quadrilateral. Findings illustrate the affordances of the TMQ over traditional, static media and its ability to serve as an inclusive geometry learning tool.