Abstract The present article investigated the composition of different joint gaze components used to operationalize various types of coordinated attention between parents and infants and which types of coordinated attention were associated with future vocabulary size. Twenty‐five 9‐month‐old infants and their parents wore head‐mounted eye trackers as they played with objects together. With high‐density gaze data, a variety of coordinated attention bout types were quantitatively measured by combining different gaze components, such as mutual gaze, joint object looks, face looks, and triadic gaze patterns. The key components of coordinated attention that were associated with vocabulary size at 12 and 15 months included the simultaneous combination of parent triadic gaze and infant object looking. The results from this article are discussed in terms of the importance of parent attentional monitoring and infant sustained attention for language development.
Controlling the input: How one‐year‐old infants sustain visual attention
Abstract Traditionally, the exogenous control of gaze by external saliencies and the endogenous control of gaze by knowledge and context have been viewed as competing systems, with late infancy seen as a period of strengthening top‐down control over the vagaries of the input. Here we found that one‐year‐old infants control sustained attention through head movements that increase the visibility of the attended object. Freely moving one‐year‐old infants ( n = 45) wore head‐mounted eye trackers and head motion sensors while exploring sets of toys of the same physical size. The visual size of the objects, a well‐documented salience, varied naturally with the infant's moment‐to‐moment posture and head movements. Sustained attention to an object was characterized by the tight control of head movements that created and then stabilized a visual size advantage for the attended object. The findings show collaboration between exogenous and endogenous attentional systems and suggest new hypotheses about the development of sustained visual attention.
- Award ID(s):
- 1842817
- PAR ID:
- 10461735
- Date Published:
- Journal Name:
- Developmental Science
- ISSN:
- 1363-755X
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy-play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, both predicted faster search times and distinguished placed and unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
-
Abstract Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment‐to‐moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image‐level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head‐mounted eye tracking, the present study objectively measured individual differences in the moment‐to‐moment variability of visual instances of the same object, from infants' first‐person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants' everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.
-
Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with and enhance human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents the analysis of eye gaze tracking data from five 6- to 8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants' head-mounted camera and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated to be more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
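The "Bayesian surprise" idea referenced in the abstract above is commonly formalized as the KL divergence between an observer's posterior and prior beliefs after each observation: expected inputs barely move the belief (low surprise), while unexpected inputs move it a lot (high surprise). The sketch below is an illustrative one-dimensional Gaussian version under assumed parameters (prior mean/variance, observation noise); it is not the study's actual image-based implementation, and the function names are hypothetical.

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) in nats, for 1-D Gaussians."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

def surprise_sequence(observations, prior_mu=0.0, prior_var=1.0, obs_var=0.25):
    """For each observation, do a conjugate Gaussian belief update (known
    observation variance) and report the KL divergence from the updated
    posterior to the pre-update prior -- the Bayesian surprise of that
    observation. Returns one surprise value per observation."""
    mu, var = prior_mu, prior_var
    surprises = []
    for x in observations:
        # Precision-weighted conjugate update of the Gaussian belief.
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        post_mu = post_var * (mu / var + x / obs_var)
        surprises.append(gaussian_kl(post_mu, post_var, mu, var))
        mu, var = post_mu, post_var  # posterior becomes the next prior
    return surprises
```

With this formulation, a repeated, expected value produces shrinking surprise as the belief tightens, while a sudden outlier produces a large spike; a full image-based model would run such updates per location over visual features rather than over a single scalar.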
-
ABSTRACT Most studies of developing visual attention are conducted using screen‐based tasks in which infants move their eyes to select where to look. However, real‐world visual exploration entails active movements of both eyes and head to bring relevant areas in view. Thus, relatively little is known about how infants coordinate their eyes and heads to structure their visual experiences. Infants were tested every 3 months from 9 to 24 months while they played with their caregiver and three toys while sitting in a highchair at a table. Infants wore a head‐mounted eye tracker that measured eye movement toward each of the visual targets (caregiver's face and toys) and how targets were oriented within the head‐centered field of view (FOV). With age, infants increasingly aligned novel toys in the center of their head‐centered FOV at the expense of their caregiver's face. Both faces and toys were better centered in view during longer looking events, suggesting that infants of all ages aligned their eyes and head to sustain attention. The bias in infants' head‐centered FOV could not be accounted for by manual action: Held toys were more poorly centered compared with non‐held toys. We discuss developmental factors—attentional, motoric, cognitive, and social—that may explain why infants increasingly adopted biased viewpoints with age.