Title: Controlling the input: How one‐year‐old infants sustain visual attention
Abstract: Traditionally, the exogenous control of gaze by external saliencies and the endogenous control of gaze by knowledge and context have been viewed as competing systems, with late infancy seen as a period of strengthening top‐down control over the vagaries of the input. Here we found that one‐year‐old infants control sustained attention through head movements that increase the visibility of the attended object. Freely moving one‐year‐old infants (n = 45) wore head‐mounted eye trackers and head motion sensors while exploring sets of toys of the same physical size. The visual size of the objects, a well‐documented salience, varied naturally with the infant's moment‐to‐moment posture and head movements. Sustained attention to an object was characterized by the tight control of head movements that created and then stabilized a visual size advantage for the attended object. The findings show collaboration between exogenous and endogenous attentional systems and suggest new hypotheses about the development of sustained visual attention.
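To make the measure concrete, below is a minimal sketch, not the authors' pipeline, of how a per-frame "visual size advantage" for the attended object might be computed from segmented head-camera frames. The frame format, object names, and the ratio-based definition are illustrative assumptions.

```python
# Sketch: quantify the attended object's visual size advantage per frame.
# Inputs are hypothetical: each frame is a dict of object name -> visual
# size (e.g., pixel area) measured in the infant's first-person view.

from statistics import mean

def size_advantage(frame_sizes: dict[str, float], attended: str) -> float:
    """Ratio of the attended object's visual size to the largest
    competitor's size in the same frame (>1 means it dominates the view)."""
    competitors = [s for name, s in frame_sizes.items() if name != attended]
    if not competitors:
        return float("inf")
    return frame_sizes[attended] / max(competitors)

def bout_advantage(frames: list[dict[str, float]], attended: str) -> float:
    """Mean size advantage across all frames of one sustained-attention bout."""
    return mean(size_advantage(f, attended) for f in frames)

# Toy example: head movements keep the attended toy large and stable in view.
bout = [
    {"duck": 5200.0, "cup": 1900.0, "car": 1400.0},
    {"duck": 5400.0, "cup": 1700.0, "car": 1500.0},
    {"duck": 5300.0, "cup": 1800.0, "car": 1300.0},
]
print(f"mean advantage: {bout_advantage(bout, 'duck'):.2f}")
```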
Award ID(s):
1842817
NSF-PAR ID:
10461735
Author(s) / Creator(s):
Date Published:
Journal Name:
Developmental Science
ISSN:
1363-755X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    The present article investigated how different joint gaze components can be combined to operationalize various types of coordinated attention between parents and infants, and which types of coordinated attention were associated with future vocabulary size. Twenty‐five 9‐month‐old infants and their parents wore head‐mounted eye trackers as they played with objects together. With high‐density gaze data, a variety of coordinated attention bout types were quantitatively measured by combining different gaze components, such as mutual gaze, joint object looks, face looks, and triadic gaze patterns. The key components of coordinated attention that were associated with vocabulary size at 12 and 15 months included the simultaneous combination of parent triadic gaze and infant object looking. These results are discussed in terms of the importance of parent attentional monitoring and infant sustained attention for language development.
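As one concrete illustration of how such bouts can be operationalized, the sketch below finds intervals where two aligned per-frame gaze streams, e.g., parent triadic gaze and infant object looking, overlap. The stream encoding, the 30 Hz sampling rate, and the 500 ms minimum bout duration are assumptions, not the study's coding scheme.

```python
# Sketch: detect coordinated-attention bouts as overlapping runs in two
# boolean gaze streams sampled at a fixed frame rate.

def coordinated_bouts(parent: list[bool], infant: list[bool],
                      fps: float = 30.0, min_dur_s: float = 0.5):
    """Return (start_s, end_s) intervals where both streams are True
    for at least min_dur_s seconds."""
    bouts, start = [], None
    both = [p and i for p, i in zip(parent, infant)]
    for t, on in enumerate(both + [False]):  # sentinel closes a trailing bout
        if on and start is None:
            start = t
        elif not on and start is not None:
            if (t - start) / fps >= min_dur_s:
                bouts.append((start / fps, t / fps))
            start = None
    return bouts

# Toy example at 30 fps: roughly one second of joint overlap.
parent_triadic = [False] * 30 + [True] * 45 + [False] * 15
infant_object  = [False] * 40 + [True] * 50
print(coordinated_bouts(parent_triadic, infant_object))
```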

     
  2. Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with and enhance human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents an analysis of eye gaze tracking data from five 6- to 8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants' head-mounted camera and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated to be more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
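For readers unfamiliar with surprise models, the sketch below shows the core idea as it is commonly formalized in this line of work: surprise is the KL divergence between the observer's belief before and after an observation. The Gaussian belief over a single video feature and the conjugate update are illustrative assumptions, not this paper's exact model.

```python
# Sketch: Bayesian surprise as KL(posterior || prior) for a Gaussian belief
# over one image feature (e.g., local luminance), updated per observation.

import math

def kl_gauss(m1, s1, m0, s0):
    """KL( N(m1, s1^2) || N(m0, s0^2) ), in nats."""
    return math.log(s0 / s1) + (s1**2 + (m1 - m0) ** 2) / (2 * s0**2) - 0.5

def surprise(prior_mean, prior_sd, x, obs_sd):
    """Conjugate Gaussian update for one observation x, then KL(post || prior)."""
    prec = 1 / prior_sd**2 + 1 / obs_sd**2
    post_sd = math.sqrt(1 / prec)
    post_mean = (prior_mean / prior_sd**2 + x / obs_sd**2) / prec
    return kl_gauss(post_mean, post_sd, prior_mean, prior_sd)

# An expected observation barely moves the belief; an outlier is "surprising".
m, s = 0.5, 0.2
for x in (0.52, 0.95):
    print(f"x={x}: surprise={surprise(m, s, x, obs_sd=0.1):.3f} nats")
```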
  3. Abstract

    Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment‐to‐moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image‐level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head‐mounted eye tracking, the present study objectively measured individual differences in the moment‐to‐moment variability of visual instances of the same object, from infants’ first‐person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants’ everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.
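A minimal sketch, under assumed inputs, of one way to score such image-level variability: the average pairwise distance between per-frame descriptors of the same object. The descriptor representation is an assumption; the study's actual measure may differ.

```python
# Sketch: score the moment-to-moment variability of an object's visual
# instances as the mean pairwise distance between per-frame descriptors.

import numpy as np

def instance_variability(frames: np.ndarray) -> float:
    """frames: (n_frames, n_features) array of per-frame object descriptors.
    Returns the mean pairwise Euclidean distance over all frame pairs."""
    n = len(frames)
    dists = [np.linalg.norm(frames[i] - frames[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
stable = rng.normal(0.0, 0.05, size=(20, 64))  # object held still in view
varied = rng.normal(0.0, 0.50, size=(20, 64))  # object rotated/manipulated
print(instance_variability(stable), "<", instance_variability(varied))
```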

     
  4. Abstract

    Parental responsiveness to infant behaviors is a strong predictor of infants' language and cognitive outcomes. The mechanisms underlying this effect, however, are relatively unknown. We examined the effects of parent speech on infants' visual attention, manual actions, hand‐eye coordination, and dyadic joint attention during parent‐infant free play. We report on two studies that used head‐mounted eye trackers in increasingly naturalistic laboratory environments. In Study 1, 12‐to‐24‐month‐old infants and their parents played on the floor of a seminaturalistic environment with 24 toys. In Study 2, a different sample of dyads played in a home‐like laboratory with 10 toys and no restrictions on their movement. In both studies, we present evidence that responsive parent speech extends the duration of infants' multimodal attention. This social “boost” of parent speech impacts multiple behaviors that have been linked to later outcomes—visual attention, manual actions, hand‐eye coordination, and joint attention. Further, the amount that parents talked during the interaction was negatively related to the effects of parent speech on infant attention. Together, these results provide evidence of a trade‐off between quantity of speech and its effects, suggesting multiple pathways through which parents impact infants' multimodal attention to shape the moment‐by‐moment dynamics of an interaction.
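The core comparison behind the reported "boost" can be sketched as follows: split infant attention bouts by whether any parent utterance overlaps them and compare mean bout durations. The interval format is an assumption; this is not the studies' analysis code.

```python
# Sketch: compare durations of infant attention bouts with vs. without an
# overlapping parent utterance. Intervals are (start_s, end_s) tuples.

def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    return a[0] < b[1] and b[0] < a[1]

def mean_duration(bouts, speech):
    """Return (mean duration with overlapping speech, mean without)."""
    with_s = [e - s for s, e in bouts if any(overlaps((s, e), u) for u in speech)]
    no_s = [e - s for s, e in bouts if not any(overlaps((s, e), u) for u in speech)]
    avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return avg(with_s), avg(no_s)

# Toy intervals in seconds: the talked-to bout runs longer.
attention = [(0.0, 2.0), (5.0, 9.5), (12.0, 13.0)]
utterances = [(6.0, 7.5)]
print(mean_duration(attention, utterances))  # (4.5, 1.5)
```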

     
  5. Spatial ability is the capacity to generate, store, retrieve, and transform visual information in order to mentally represent a space and make sense of it. This ability is a critical facet of human cognition that affects knowledge acquisition, productivity, and workplace safety. Although improved spatial ability is essential for safely navigating and perceiving a space on earth, it is even more critical in the altered environments of other planets and deep space, which may pose extreme and unfamiliar visuospatial conditions. Such conditions range from microgravity settings, where the body and visual axes are misaligned, to a lack of landmark objects that offer spatial cues for perceiving size, distance, and speed. These altered visuospatial conditions may challenge human spatial cognitive processing, which helps humans locate objects in space, perceive them visually, and comprehend spatial relationships between the objects and their surroundings. The main goal of this paper is to examine whether eye-tracking data on gaze patterns can indicate whether such altered conditions demand more mental effort and attention. The key dimensions of spatial ability (i.e., spatial visualization, spatial relations, and spatial orientation) are examined under three simulated conditions: (1) aligned body and visual axes (control group); (2) statically misaligned body and visual axes (experiment group I); and (3) dynamically misaligned body and visual axes (experiment group II). The three conditions were simulated in Virtual Reality (VR) using the Unity 3D game engine. Participants recruited from the Texas A&M University student population wore HTC VIVE Head-Mounted Displays (HMDs) equipped with eye-tracking technology while working on three spatial tests measuring spatial visualization, orientation, and relations. The Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test were used to evaluate the spatial visualization, spatial relations, and spatial orientation, respectively, of 78 participants. For each test, gaze data were collected through the Tobii eye tracker integrated into the HTC VIVE HMDs. Quick eye movements, known as saccades, were identified by analyzing the raw eye-tracking data for the rate of change of gaze position over time, and the number of saccades was used as a measure of mental effort. The results showed that the mean number of saccades in the MCT and PSVT: R tests was statistically larger in experiment group II than in the control group or experiment group I. However, the PTA test data did not meet the assumptions required to compare the mean number of saccades across the three groups. The results suggest that spatial relations and visualization may require more mental effort under dynamically misaligned idiotropic and visual axes than under aligned or statically misaligned axes. However, the data could not reveal whether spatial orientation requires more or less mental effort under aligned, statically misaligned, and dynamically misaligned idiotropic and visual axes. The results of this study are important for understanding how altered visuospatial conditions impact spatial cognition and how simulation- or game-based training tools can be developed to train people to adapt to extreme or altered work environments and to work more productively and safely.
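The saccade-identification step described above maps onto the standard velocity-threshold algorithm (I-VT); a minimal sketch follows. The 60 Hz sampling rate and 30 deg/s threshold are common defaults assumed here for illustration, not the study's parameters.

```python
# Sketch: velocity-threshold (I-VT) saccade counting. Each contiguous run of
# samples whose angular velocity exceeds the threshold counts as one saccade.

import math

def count_saccades(gaze, fps=60.0, thresh_deg_s=30.0):
    """gaze: list of (x_deg, y_deg) positions in degrees of visual angle,
    sampled at fps Hz. Returns the number of saccades detected."""
    saccades, in_saccade = 0, False
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        vel = math.hypot(x1 - x0, y1 - y0) * fps  # deg/s between samples
        if vel > thresh_deg_s and not in_saccade:
            saccades += 1
            in_saccade = True
        elif vel <= thresh_deg_s:
            in_saccade = False
    return saccades

# Toy trace: fixation, a fast 10-degree jump, then fixation -> 1 saccade.
trace = [(0.0, 0.0)] * 10 + [(2.0, 0.0), (6.0, 0.0), (10.0, 0.0)] + [(10.0, 0.0)] * 10
print(count_saccades(trace))
```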

     