Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are beginning to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with, and enhance, human-administered therapy. For SAR to be effective, the robot must consistently attract the infant's attention in order to engage the infant in a desired activity. This work presents an analysis of eye-gaze tracking data from five 6- to 8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise, applied to video from the infants' head-mounted cameras and to the timing of robot behaviors, as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations fell in areas the model evaluated to be more surprising than average. We also present an initial exploration of using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals of the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
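The abstract does not detail how surprise is computed, but Bayesian surprise is commonly defined (following Itti and Baldi's formulation) as the KL divergence between an observer's posterior and prior beliefs after an observation. A minimal sketch under a simple Gaussian feature model follows; the function names, the Gaussian assumption, and all parameter values are illustrative, not the authors' implementation:

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) in nats."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def surprise(mu0, var0, x, obs_var):
    """Bayesian surprise for one observation under a Gaussian model.

    Prior N(mu0, var0) over a feature's mean; observation x with known
    noise variance obs_var. Surprise = KL(posterior || prior). Returns
    (surprise_nats, mu1, var1) so the posterior can serve as the next
    prior when processing a stream (e.g., video frames) sequentially.
    """
    precision = 1.0 / var0 + 1.0 / obs_var  # precisions add under conjugacy
    var1 = 1.0 / precision
    mu1 = var1 * (mu0 / var0 + x / obs_var)  # precision-weighted mean
    return gaussian_kl(mu1, var1, mu0, var0), mu1, var1

# An observation far from the prior mean is more surprising than a near one:
s_near, _, _ = surprise(0.5, 0.1, 0.55, 0.05)
s_far, _, _ = surprise(0.5, 0.1, 0.95, 0.05)
```

In a frame-by-frame pipeline, a map of such per-location surprise values could then be compared against recorded gaze coordinates, as in the analysis described above.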
Feasibility of Using the Robot Sphero to Promote Perceptual-Motor Exploration in Infants
Infant-robot interaction has been gaining increasing attention, yet few studies have examined robot-assisted environments that promote perceptual-motor development in infants. This paper assesses the feasibility of operating a spherical mobile robot, Sphero, to engage infants in perceptual-motor exploration of an open area. Two case scenarios were considered. In the first, Sphero was the only robot providing stimuli in the environment; in the second, two additional robots provided stimuli along with Sphero. Pilot data from two infants were analyzed to extract information on their visual attention to and physical interaction with Sphero, as well as their motor actions. Overall, infants (i) expressed a preference for Sphero regardless of stimulation level, and (ii) moved out of stationary postures in an effort to chase and approach Sphero. These preliminary findings support the future implementation of Sphero in robot-assisted learning environments to promote perceptual-motor development in infants.
- Award ID(s): 2014264
- PAR ID: 10366166
- Date Published:
- Journal Name: 2022 17th ACM/IEEE International Conference on Human-Robot Interaction
- Page Range / eLocation ID: 850-854
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
In the language development literature, studies often make inferences about infants' speech perception abilities based on their responses to a single speaker. However, there can be significant natural variability across speakers in how speech is produced (i.e., inter-speaker differences). The current study examined whether inter-speaker differences affect infants' ability to detect a mismatch between the auditory and visual components of vowels. Using an eye tracker, 4.5-month-old infants were tested on auditory-visual (AV) matching for two vowels (/i/ and /u/). Critically, infants were tested with two speakers who naturally differed in how distinctively they articulated the two vowels within and across the categories. Only infants who watched and listened to the speaker whose visual articulations of the two vowels were most distinct from one another were sensitive to AV mismatch. This speaker also produced a visually more distinct /i/ than the other speaker. This finding suggests that infants are sensitive to the distinctiveness of AV information across speakers, and that characteristics of the speaker should be taken into account when making inferences about infants' perceptual abilities.
Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills—attending to faces and attending to the eyes—surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2-, 4-, and 6-month-olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces than for animal faces. Unexpectedly, human faces with both averted heads and averted eyes received the most attention. This pattern may reflect the early emergence of gaze following—the ability to look where another individual looks—which begins to develop around this age. Infants may be especially interested in averted-gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized, functional conspecific face detection.
Practicing complex locomotor skills, such as those involving a step sequence, engages distinct perceptual and motor mechanisms that support the recall of learning under new conditions (i.e., skill transfer). While sleep has been shown to enhance learning of sequences of fine movements (i.e., sleep-dependent consolidation), here we examined whether this benefit extends to learning of a locomotor pattern. Specifically, we tested the perceptual and motor learning of a locomotor sequence following sleep compared to wake. We hypothesized that post-practice sleep would increase locomotor sequence learning in the perceptual, but not the motor, domain. In this study, healthy young adult participants (n = 48; 18–33 years) practiced a step-length sequence on a treadmill, cued by visual stimuli displayed on a screen during training. Participants were then tested in a perceptual condition (backward walking with the same visual stimuli) or a motor condition (forward walking but with an inverted screen). Skill was assessed immediately, and again after a 12-h delay following overnight sleep or daytime wake (n = 12 for each interval/condition). Off-line learning improved following sleep compared to wake, but only for the perceptual condition. Our results suggest that perceptual and motor sequence learning are processed separately after locomotor training, and further point to a benefit of sleep that is rooted in the perceptual, rather than the motor, aspects of motor learning.
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties—increases in the centering and visual size of the object, and decreases in the number of competing objects immediately surrounding it—both predicted faster search time and distinguished placed from unplaced objects. These results suggest that the bottom-up saliences most readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
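The body-centric properties this abstract describes (centering, visual size, and nearby competing objects) can be computed directly from object bounding boxes in an egocentric frame. A minimal sketch, assuming normalized bounding-box annotations; the class and function names and the competitor radius are illustrative choices, not those of the study:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Object bounding box in normalized image coordinates ([0, 1])."""
    x: float
    y: float
    w: float
    h: float

    @property
    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

def centering(box: Box) -> float:
    """Distance from the box center to the image center (0 = centered)."""
    cx, cy = box.center
    return ((cx - 0.5) ** 2 + (cy - 0.5) ** 2) ** 0.5

def visual_size(box: Box) -> float:
    """Fraction of the image area the object occupies."""
    return box.w * box.h

def num_competitors(target: Box, others: list, radius: float = 0.25) -> int:
    """Count other objects whose centers fall within `radius` of the target."""
    tx, ty = target.center
    return sum(
        1 for b in others
        if ((b.center[0] - tx) ** 2 + (b.center[1] - ty) ** 2) ** 0.5 < radius
    )

# A large, centered object with no nearby clutter scores well on all three
# properties the study associates with faster search:
placed = Box(0.4, 0.4, 0.2, 0.2)
clutter = [Box(0.05, 0.05, 0.05, 0.05)]
```

Under this sketch, comparing these three scores for placed versus unplaced objects would mirror the analysis the abstract reports.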