Title: School-age children are more skeptical of inaccurate robots than adults
Abstract: We expect children to learn new words, skills, and ideas from various technologies. When learning from humans, children prefer people who are reliable and trustworthy, yet children also forgive people's occasional mistakes. Are the dynamics of children learning from technologies, which can also be unreliable, similar to learning from humans? We tackle this question by focusing on early childhood, an age at which children are expected to master foundational academic skills. In this project, 168 4–7-year-old children (Study 1) and 168 adults (Study 2) played a word-guessing game with either a human or robot. The partner first gave a sequence of correct answers, but then followed this with a sequence of wrong answers, with a reaction following each one. Reactions varied by condition, either expressing an accident, an accident marked with an apology, or an unhelpful intention. We found that older children were less trusting than both younger children and adults and were even more skeptical after errors. Trust decreased most rapidly when errors were intentional, but only children (and especially older children) outright rejected help from intentionally unhelpful partners. As an exception to this general trend, older children maintained their trust for longer when a robot (but not a human) apologized for its mistake. Our work suggests that educational technology design cannot be one size fits all but rather must account for developmental changes in children's learning goals.
Award ID(s): 1955653
PAR ID: 10576147
Author(s) / Creator(s):
Publisher / Repository: Elsevier
Date Published:
Journal Name: Cognition
Volume: 249
Issue: C
ISSN: 0010-0277
Page Range / eLocation ID: 105814
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Social robots are coming to our homes and have already been used to help humans in a number of ways in geriatric care. This article aims to develop a framework that enables social robots to conduct regular clinical screening interviews in geriatric care, such as cognitive evaluation, fall-risk evaluation, and pain rating. We develop a social robot with the essential features for clinical screening interviews, including a conversational interface, face tracking, an interaction handler, attention management, robot skills, and cloud service management. In addition, we propose and implement a general clinical screening interview management (GCSIM) model. GCSIM enables social robots to handle various types of clinical questions and answers, evaluate and score responses, engage interviewees during conversations, and generate reports on their well-being. Caregivers or physicians can use these reports to evaluate the progression of cognitive impairment, fall risk, pain level, and so on. Such a clinical screening capability allows for early detection and treatment planning in geriatric care. The framework was implemented on our 3-D-printed social robot and tested with 30 older adults of different ages; it achieved satisfactory results, and participants expressed high confidence and trust in using the robot for well-being assessment.
  2. In peer tutoring, the learner is taught by a peer rather than by a traditional tutor. This strategy has been shown to be effective in human tutoring, where students achieve higher learning gains when taught by a peer instead of a traditional tutor. Similar results have been shown in child-robot interaction studies, where a peer robot was more effective than a tutor robot at teaching children. In this work, we compare skill increase and perception of a peer robot versus a tutor robot when teaching adults. We designed a system in which a robot provides personalized help to adults in electronic circuit construction, and we compare the number of skills learned and participants' preferences between the peer robot and the tutor robot. Participants in both conditions improved their circuit skills after interacting with the robot, and there were no significant differences in the number of skills learned between conditions. However, participants with low prior domain knowledge learned significantly more with a peer robot than with a tutor robot. Furthermore, the peer robot was perceived as friendlier, more social, smarter, and more respectful than the tutor robot, regardless of initial skill level.
  3. Community intergenerational mentorship offers an opportunity to address older adults' social isolation while providing valuable one-on-one or small-group learning experiences for elementary school students. Organizations that currently support this kind of engagement focus on in-person visits, which place the burden of logistics and transportation on the older adult. However, as older adults age and become less independent, coming to schools in person becomes more challenging. We present a qualitative analysis of current intergenerational mentorship practices to understand opportunities for technology to expand access to this experience. We highlight elements critical for building successful mentorship: relationship building between older adults and children during mentoring activities, the skills mentors acquire to carry out those activities, and the support needed from teachers and schools. We contribute a rich description of current intergenerational mentorship practices and identify opportunities for novel HCI technologies in this context.
  4. For robots to seamlessly interact with humans, we first need to make sure that humans and robots understand one another. Diverse algorithms have been developed to enable robots to learn from humans (i.e., transferring information from humans to robots). In parallel, visual, haptic, and auditory communication interfaces have been designed to convey the robot’s internal state to the human (i.e., transferring information from robots to humans). Prior research often separates these two directions of information transfer, and focuses primarily on either learning algorithms or communication interfaces. By contrast, in this survey we take an interdisciplinary approach to identify common themes and emerging trends that close the loop between learning and communication. Specifically, we survey state-of-the-art methods and outcomes for communicating a robot’s learning back to the human teacher during human-robot interaction. This discussion connects human-in-the-loop learning methods and explainable robot learning with multimodal feedback systems and measures of human-robot interaction. We find that—when learning and communication are developed together—the resulting closed-loop system can lead to improved human teaching, increased human trust, and human-robot co-adaptation. The paper includes a perspective on several of the interdisciplinary research themes and open questions that could advance how future robots communicate their learning to everyday operators. Finally, we implement a selection of the reviewed methods in a case study where participants kinesthetically teach a robot arm. This case study documents and tests an integrated approach for learning in ways that can be communicated, conveying this learning across multimodal interfaces, and measuring the resulting changes in human and robot behavior. 
  5. This paper presents preliminary research on whether children will accept a robot as part of their ingroup, and on how a robot's group membership affects trust, closeness, and social support. Trust is important in human-robot interaction because it affects whether people will follow robots' advice. In this study, we randomly assigned 11- and 12-year-old participants to a condition such that they were either on a team with the robot (ingroup) or were opponents of the robot (outgroup) for an online game. Thus far, we have eight participants in the ingroup condition. Our preliminary results showed that children had low levels of trust, closeness, and social support with the robot. Participants had a much more negative response than we anticipated. We speculate that responses will be more positive in an in-person setting rather than a remote one.