This content will become publicly available on June 1, 2024

Title: Kinesthetic Feedback for Understanding Program Execution
To better prepare future generations, knowledge of computers and programming is among the skills taught in almost all Science, Technology, Engineering, and Mathematics (STEM) programs; however, teaching and learning programming is a complex task that is generally considered difficult by students and teachers alike. One approach to engaging and inspiring students from a variety of backgrounds is the use of educational robots. Unfortunately, previous research presents mixed results on the effectiveness of educational robots for student learning. One possible explanation for this lack of clarity is that students have a wide variety of learning styles. The use of kinesthetic feedback, in addition to the usual visual feedback, may improve learning with educational robots by providing a richer, multimodal experience that appeals to a larger number of students with different learning styles. It is also possible, however, that added kinesthetic feedback may interfere with the visual feedback and decrease a student's ability to interpret the program commands being executed by a robot, which is critical for program debugging. In this work, we investigated whether human participants could accurately determine a sequence of program commands performed by a robot when kinesthetic and visual feedback were used together. Command recall and end-point location determination were compared to the typically used visual-only method, as well as to a narrative description. Results from 10 sighted participants indicated that individuals were able to accurately determine a sequence of movement commands and their magnitude when using combined kinesthetic + visual feedback. Participants' recall accuracy of program commands was actually better with kinesthetic + visual feedback than with visual feedback alone.
Although recall accuracy was better still with the narrative description, this was primarily because participants confused an absolute rotation command with a relative rotation command under the kinesthetic + visual feedback. Participants' accuracy in locating the zone of the end point after a command was executed was significantly better for both the kinesthetic + visual and narrative methods than for the visual-only method. Together, these results suggest that combined kinesthetic + visual feedback improves, rather than decreases, an individual's ability to interpret program commands.
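The absolute-versus-relative rotation confusion reported above can be made concrete with a small sketch. This is a hypothetical illustration, not the study's actual command set; the `Robot` class and command names are invented for exposition. A relative command turns *by* an angle from the current heading, while an absolute command turns *to* a fixed heading regardless of history.

```python
class Robot:
    """Hypothetical robot with a heading in degrees (0 = facing 'north')."""

    def __init__(self):
        self.heading = 0.0

    def rotate_by(self, delta):
        """Relative rotation: turn delta degrees from the current heading."""
        self.heading = (self.heading + delta) % 360

    def rotate_to(self, target):
        """Absolute rotation: turn to a fixed heading, wherever we started."""
        self.heading = target % 360

r = Robot()
r.rotate_by(90)   # relative: now facing 90
r.rotate_by(90)   # relative: adds again, now facing 180
r.rotate_to(90)   # absolute: snaps back to 90 regardless of history
```

Watching only the robot's motion, the second `rotate_by(90)` and a `rotate_to(180)` look identical, which may explain why participants conflated the two command types.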
Award ID(s): 1742242
NSF-PAR ID: 10461741
Author(s) / Creator(s):
Date Published:
Journal Name: Sensors
Volume: 23
Issue: 11
ISSN: 1424-8220
Page Range / eLocation ID: 5159
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning is transparent to the human teacher. Visual displays that show the robot’s uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw 
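For readers unfamiliar with the psychophysics measure cited above: the Weber fraction is the just-noticeable difference (JND) expressed as a proportion of the reference stimulus intensity. A minimal sketch with illustrative numbers (the 11.4% figure comes from the abstract; the pressure values are assumptions, not the study's data):

```python
def weber_fraction(jnd, reference):
    """Return the JND as a proportion of the reference stimulus intensity."""
    return jnd / reference

# Illustrative only: if users reliably detect a 1.14 kPa change in inflation
# against a 10 kPa reference, the Weber fraction is 0.114 (i.e., 11.4%).
wf = weber_fraction(1.14, 10.0)
print(f"{wf:.1%}")  # prints 11.4%
```

A smaller Weber fraction means finer discrimination: users can tell apart inflation levels that differ by a smaller proportion of the baseline.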
  2. Graphical representations are ubiquitous in the learning and teaching of science, technology, engineering, and mathematics (STEM). However, these materials are often not accessible to the over 547,000 students in the United States with blindness and significant visual impairment, creating barriers to pursuing STEM educational and career pathways. Furthermore, even when such materials are made available to visually impaired students, access is likely through literalized modes (e.g., braille, verbal description), which is problematic as these approaches (1) do not directly convey spatial information and (2) are different from the graphic-based materials used by students without visual impairment. The purpose of this study was to design and evaluate a universally accessible system for communicating graphical representations in STEM classes. By combining a multisensory vibro-audio interface and an app running on consumer mobile hardware, the system is meant to work equally well for all students, irrespective of their visual status. We report the design of the experimental system and the results of an experiment where we compared learning performance with the system to traditional (visual or tactile) diagrams for sighted participants (n = 20) and visually impaired participants (n = 9), respectively. While the experimental multimodal diagrammatic system (MDS) did result in significant learning gains for both groups of participants, the results also revealed no statistically significant differences in the capacity for learning from graphical information across both comparison groups. Likewise, there were no statistically significant differences in the capacity for learning from graphical information between the stimuli presented through the experimental system and the traditional (visual or tactile) diagram control conditions, across either participant group.
These findings suggest that both groups were able to learn graphical information from the experimental system as well as traditional diagram presentation materials. This learning modality was supported without the need for conversion of the diagrams to make them accessible for participants who required tactile materials. The system also provided additional multisensory information for sighted participants to interpret and answer questions about the diagrams. Findings are interpreted in terms of new universal design principles for producing multisensory graphical representations that would be accessible to all learners.

  3. Background: There are 4.9 million English Language Learners (ELLs) in the United States. Only 2% of educators are trained to support these vulnerable students. Social robots show promise for language acquisition and may provide valuable support for students, especially as we return to needing smaller classes due to COVID-19. While cultural responsiveness increases gains for ELLs, little is known about the design of culturally responsive child–robot interactions. Method: Therefore, using a participatory design approach, we conducted an exploratory study with 24 Spanish-speaking ELLs at a Pacific Northwest elementary school. As cultural informants, students participated in a 15-min, robot-led, small group story discussion followed by a post-interaction feedback session. We then conducted reflexive critiques with six ELL teachers who reviewed the group interactions to provide further interpretation on design feature possibilities and potential interactions with the robot. Results: Students found the social robot engaging, but many were hesitant to converse with the robot. During post-interaction dialogue, students articulated the specific ways in which the social robot's appearance and behavior could be modified to help them feel more comfortable. Teachers postulated that the social robot could be designed to engage students in peer-to-peer conversations. Teachers also recognized the ELLs' verbosity when discussing their experiences with the robot and suggested such interactions could stimulate responsiveness from students. Conclusion: Cultural responsiveness is a key component of successful education for ELLs. However, integrating appropriate cultural responsiveness into robot interactions may require participants as cultural informants to ensure the robot behaviors and interactions are situated in that educational community.
Utilizing a participatory approach to engage ELLs in design decisions for social robots is a promising way to gather culturally responsive requirements to inform successful child–robot interactions. 
  4.
    Augmented reality (AR) applications are growing in popularity in educational settings. While the effects of AR experiences on learning have been widely studied, there is relatively less research on understanding the impact of AR on the dynamics of co-located collaborative learning, specifically in the context of novices programming robots. Educational robotics is a powerful learning context because it engages students with problem solving, critical thinking, STEM (Science, Technology, Engineering, Mathematics) concepts, and collaboration skills. However, such collaborations can suffer due to students having unequal access to resources or dominant peers. In this research we investigate how augmented reality impacts learning and collaboration while peers engage in robot programming activities. We use a mixed methods approach to measure how participants are learning, manipulating resources, and engaging in problem solving activities with peers. We investigate how these behaviors are impacted by the presence of augmented reality visualizations, and by participants' proximity to resources. We find that augmented reality improved overall group learning and collaboration. Detailed analysis shows that AR strongly helps one participant more than the other, by improving their ability to learn and contribute while remaining engaged with the robot. Furthermore, augmented reality helps both participants maintain a common ground and balance contributions during problem solving activities. We discuss the implications of these results for designing AR and non-AR collaborative interfaces.
  5.

    Communicating and interpreting uncertainty in ecological model predictions is notoriously challenging, motivating the need for new educational tools, which introduce ecology students to core concepts in uncertainty communication. Ecological forecasting, an emerging approach to estimate future states of ecological systems with uncertainty, provides a relevant and engaging framework for introducing uncertainty communication to undergraduate students, as forecasts can be used as decision support tools for addressing real‐world ecological problems and are inherently uncertain. To provide critical training on uncertainty communication and introduce undergraduate students to the use of ecological forecasts for guiding decision‐making, we developed a hands‐on teaching module within the Macrosystems Environmental Data‐Driven Inquiry and Exploration (EDDIE; MacrosystemsEDDIE.org) educational program. Our module used an active learning approach by embedding forecasting activities in an R Shiny application to engage ecology students in introductory data science, ecological modeling, and forecasting concepts without needing advanced computational or programming skills. Pre‐ and post‐module assessment data from more than 250 undergraduate students enrolled in ecology, freshwater ecology, and zoology courses indicate that the module significantly increased students' ability to interpret forecast visualizations with uncertainty, identify different ways to communicate forecast uncertainty for diverse users, and correctly define ecological forecasting terms. Specifically, students were more likely to describe visual, numeric, and probabilistic methods of uncertainty communication following module completion. Students were also able to identify more benefits of ecological forecasting following module completion, with the key benefits of using forecasts for prediction and decision‐making most commonly described.
These results show promise for introducing ecological model uncertainty, data visualizations, and forecasting into undergraduate ecology curricula via software‐based learning, which can increase students' ability to engage and understand complex ecological concepts.
