
Title: The Experience and Effect of Adolescent to Robot Stress Disclosure: A Mixed-Methods Exploration
Social robots hold potential as an effective and appropriate technology for reducing stress and improving the mental health of adolescents. To understand the effect of adolescent-to-robot disclosure on momentary stress, we conducted an exploratory, mixed-methods study with sixty-nine US adolescents (ages 14–21) in school settings. We compared a generic, minimalist robot interaction across three robot embodiments: physical, digital (computer screen), and immersive virtual reality. We found that participants’ momentary stress levels decreased significantly across multiple interactions over time, with the physical and virtual reality embodiments proving most effective for stress reduction. In addition, our qualitative findings provide unique insights into the types of stressors adolescents shared with the social robots as well as their experiences with the different interaction embodiments.
Award ID(s):
1734100
Publication Date:
NSF-PAR ID:
10292248
Journal Name:
International Conference on Social Robotics
Volume:
12483
Sponsoring Org:
National Science Foundation
More Like this
  1. Telepresence technology enables users to be virtually present in another location in real time through video streaming. Adding remotely driven mobility to this interaction yields what is called a telepresence robot. These innovative machines connect individuals with restricted mobility and increase social interaction, collaboration, and active participation. However, operating and navigating these robots is challenging for users who have little knowledge or map of the remote environment: avoiding obstacles via a narrow camera view under manual remote operation is a cumbersome task. Moreover, users lack a sense of immersion while they are busy maneuvering via the real-time video feed, which decreases their capacity to handle other tasks. This demo presents a virtual reality telepresence robot with simultaneous mapping and autonomous driving. Leveraging a 2D lidar sensor, we generate two-dimensional occupancy grid maps via SLAM and provide assisted navigation that reduces the onerous task of avoiding obstacles. The attitude of the robotic head, which carries a camera, is remotely controlled via the virtual reality headset. Remote users gain a visceral understanding of the environment while teleoperating the robot.
  2. In previous work, researchers in Human-Robot Interaction (HRI) have demonstrated that user trust in robots depends on effective and transparent communication. This may be particularly true for robots used for transportation, due to user reliance on such robots for physical movement and safety. In this paper, we present the design of an experiment examining the importance of proactive communication by robotic wheelchairs, as compared to non-vehicular mobile robots, within a Virtual Reality (VR) environment. Furthermore, we describe the specific advantages – and limitations – of conducting this type of HRI experiment in VR.
  3. Augmented Reality (AR) or Mixed Reality (MR) enables innovative interactions by overlaying virtual imagery on the physical world. For roboticists, this creates new opportunities to apply proven non-verbal interaction patterns, like gesture, to physically limited robots. However, a wealth of HRI research has demonstrated that there are real benefits to physical embodiment (compared, e.g., to virtual robots displayed on screens), which suggests that AR augmentation of virtual robot parts could face similar challenges. In this work, we present the design of an experiment to objectively and subjectively compare the use of AR and physical arms for deictic gesture, in AR and physical task environments. Our future results will inform robot designers choosing between physical and virtual arms, and provide a new, nuanced understanding of the use of mixed-reality technologies in HRI contexts.
  4. In previous work, researchers have repeatedly demonstrated that robots' use of deictic gestures enables effective and natural human-robot interaction. However, new technologies such as augmented reality head-mounted displays enable environments in which mixed reality becomes possible, and in such environments, physical gestures become but one category among many types of mixed-reality deictic gestures. In this paper, we present the first experimental exploration of the effectiveness of mixed-reality deictic gestures beyond physical gestures. Specifically, we investigate human perception of videos simulating the display of allocentric gestures, in which robots circle their targets in users' fields of view. Our results suggest that this is an effective communication strategy, in terms of both objective accuracy and subjective perception, especially when paired with complex natural language references.
  5. Background: Play is critical for children’s physical, cognitive, and social development. Technology-based toys like robots are especially of interest to children. This pilot study explores the affordances of a play area provided by developmentally appropriate toys and a mobile socially assistive robot (SAR). The objective of this study is to assess the role of the SAR in the physical activity, play behavior, and toy-use behavior of children during free play. Methods: Six children (5 females, M age = 3.6 ± 1.9 years) participated in the majority of our pilot study’s seven 30-minute weekly play sessions (4 baseline and 3 intervention). During baseline sessions, the SAR was powered off. During intervention sessions, the SAR was teleoperated to move in the play area and offered rewards of lights, sounds, and bubbles to the children. Thirty-minute videos of the play sessions were annotated using a momentary time sampling observation system. Mean percentages of time spent in behaviors of interest during baseline and intervention sessions were calculated, and paired Wilcoxon signed-rank tests were conducted to assess differences between the two phases. Results: There was a significant increase in children’s standing (∼15%; Z = −2.09; p = 0.037) and a tendency toward less time sitting (∼19%; Z = −1.89; p = 0.059) in the intervention phase as compared to the baseline phase. There was also a significant decrease (∼4.5%; Z = −2.70; p = 0.007) in peer-interaction play and a tendency toward greater (∼4.5%; Z = −1.89; p = 0.059) interaction with adults in the intervention phase. There was a significant increase in children’s interaction with the robot (∼11.5%; Z = −2.52; p = 0.012) in the intervention phase as compared to the baseline phase. Conclusion: These results may indicate that a mobile SAR provides affordances, through rewards, that elicit children’s interaction with the SAR and more time standing during free play. This pilot study lays a foundation for exploring the role of SARs in inclusive play environments for children with and without mobility disabilities in real-world settings like day-care centers and preschools.
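The paired Wilcoxon signed-rank test used in the pilot study above is a standard non-parametric test for paired before/after measurements. A minimal sketch using SciPy, with hypothetical percentage-of-time values (not the study's actual data), shows the shape of such an analysis:

```python
# Paired Wilcoxon signed-rank test on percent-of-session-time measurements.
# The numbers below are hypothetical placeholders, one value per child,
# not data from the pilot study described above.
from scipy.stats import wilcoxon

baseline     = [20.0, 35.5, 12.0, 28.0, 40.0, 18.5]  # % time standing, SAR off
intervention = [38.0, 48.0, 30.5, 41.0, 52.0, 33.0]  # % time standing, SAR active

# Two-sided test; with n = 6 and no ties, SciPy uses the exact distribution.
stat, p = wilcoxon(baseline, intervention)
print(f"W = {stat}, p = {p:.3f}")  # → W = 0.0, p = 0.031
```

Here every child stands more in the intervention phase, so the smaller rank sum is 0 and the exact two-sided p-value is 2/2⁶ ≈ 0.031; with samples this small, the exact test is preferable to the normal approximation that yields the Z statistics reported in the abstract.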