Title: Postural developments modulate children’s visual access to social information
The ability to process social information is a critical component of children's early language and cognitive development. However, as children reach their first birthday, they begin to locomote themselves, dramatically affecting their visual access to this information. How do these postural and locomotor changes affect children's access to the social information relevant for word learning? Here, we explore this question by using head-mounted cameras to record the egocentric visual perspective of 36 infants (8-16 months of age) and applying computer vision algorithms to estimate the proportion of faces and hands in infants' environments. We find that infants' posture and orientation to their caregiver modulate their access to social information, confirming previous work suggesting that motoric developments play a significant role in the emergence of children's linguistic and social capacities. We suggest that the combined use of head-mounted cameras and new computer vision techniques is a promising avenue for understanding the statistics of infants' visual and linguistic experience.
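As an illustration of this kind of pipeline (not the authors' actual implementation), the sketch below estimates the proportion of sampled egocentric video frames that contain at least one detectable face, using OpenCV's stock Haar-cascade detector. The video file name, sampling rate, and detector settings are assumptions for the example; a full pipeline would add a dedicated hand detector and validate against human annotation.

```python
# Minimal sketch, assuming an off-the-shelf detector (not the study's
# pipeline): fraction of sampled head-camera frames with >=1 face.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_proportion(video_path, sample_every=30):
    """Return the fraction of sampled frames with at least one face."""
    cap = cv2.VideoCapture(video_path)
    sampled = with_face = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:  # ~1 frame/sec at 30 fps
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5)
            sampled += 1
            with_face += int(len(faces) > 0)
        idx += 1
    cap.release()
    return with_face / sampled if sampled else 0.0

print(face_proportion("infant_headcam.mp4"))  # hypothetical file name
```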
Award ID(s): 1714726
PAR ID: 10127823
Journal Name: Proceedings of the 40th Annual Conference of the Cognitive Science Society
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Most studies of developing visual attention are conducted using screen-based tasks in which infants move their eyes to select where to look. However, real-world visual exploration entails active movements of both eyes and head to bring relevant areas into view. Thus, relatively little is known about how infants coordinate their eyes and heads to structure their visual experiences. Infants were tested every 3 months from 9 to 24 months as they played with their caregiver and three toys while sitting in a highchair at a table. Infants wore a head-mounted eye tracker that measured eye movements toward each of the visual targets (caregiver's face and toys) and how targets were oriented within the head-centered field of view (FOV). With age, infants increasingly aligned novel toys in the center of their head-centered FOV at the expense of their caregiver's face. Both faces and toys were better centered in view during longer looking events, suggesting that infants of all ages aligned their eyes and head to sustain attention. The bias in infants' head-centered FOV could not be accounted for by manual action: held toys were more poorly centered than non-held toys. We discuss developmental factors (attentional, motoric, cognitive, and social) that may explain why infants increasingly adopted biased viewpoints with age.
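One way to make the head-centered FOV measure concrete: the hypothetical sketch below scores how far a target falls from the center of the head-camera image, in approximate degrees. The resolution and field-of-view values are assumed, not those of the tracker used in the study.

```python
# Illustrative sketch: angular offset of a target pixel from the
# center of the head-centered field of view. Camera parameters are
# assumed values for the example.
import math

IMG_W, IMG_H = 640, 480   # assumed head-camera resolution
HFOV_DEG = 90.0           # assumed horizontal field of view

def eccentricity_deg(x, y):
    """Approximate angular offset (deg) of pixel (x, y) from FOV center."""
    deg_per_px = HFOV_DEG / IMG_W  # crude linear approximation
    dx, dy = x - IMG_W / 2, y - IMG_H / 2
    return math.hypot(dx, dy) * deg_per_px

print(eccentricity_deg(320, 240))  # 0.0 -> well centered
print(eccentricity_deg(320, 40))   # larger -> peripheral in head FOV
```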
  2. Traditionally, the exogenous control of gaze by external saliences and the endogenous control of gaze by knowledge and context have been viewed as competing systems, with late infancy seen as a period of strengthening top-down control over the vagaries of the input. Here we found that one-year-old infants control sustained attention through head movements that increase the visibility of the attended object. Freely moving one-year-old infants (n = 45) wore head-mounted eye trackers and head motion sensors while exploring sets of toys of the same physical size. The visual size of the objects, a well-documented salience, varied naturally with the infant's moment-to-moment posture and head movements. Sustained attention to an object was characterized by the tight control of head movements that created and then stabilized a visual size advantage for the attended object. The findings show collaboration between exogenous and endogenous attentional systems and suggest new hypotheses about the development of sustained visual attention.
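A worked example of the geometry behind this visual size salience, under assumed toy sizes and viewing distances: an object of physical size s at distance d subtends a visual angle of 2*atan(s / 2d), so moving the head closer sharply increases visual size even though physical size is constant.

```python
# Worked example (assumed sizes/distances, not the study's data):
# how visual angle grows as viewing distance shrinks.
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended by an object: theta = 2*atan(s / 2d)."""
    return math.degrees(2 * math.atan2(object_size_m, 2 * distance_m))

# A 10 cm toy viewed from 60 cm vs. after leaning in to 30 cm:
print(visual_angle_deg(0.10, 0.60))  # ~9.5 degrees
print(visual_angle_deg(0.10, 0.30))  # ~18.9 degrees: roughly doubled
```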
  3. Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy-play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, predicted faster search times and distinguished placed from unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
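A hedged sketch of the kind of analysis this abstract describes: an ordinary least-squares regression of search time on the body-centric salience measures. The data file and column names are hypothetical placeholders, not the study's materials.

```python
# Illustrative regression of visual-search time on salience measures.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("egocentric_search_trials.csv")  # hypothetical file

# Faster search predicted by better centering, larger visual size,
# and fewer competing objects near the target.
model = smf.ols(
    "log_search_time ~ centering + visual_size + n_competitors",
    data=df).fit()
print(model.summary())
```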
  4. Children rely on their approximate number system (ANS) to estimate quantities from a young age. Studies have shown that older children display better ANS performance, but previous research has not explained this improvement. We show that children's ANS development is primarily driven by improved attentional control and awareness of peripheral information. In our experiment, children guessed the number of dots on a computer screen while being eye-tracked. The behavioral and eye-tracking results support this account. Our analysis shows that children estimate better with longer display times and more visual foveation, with the effect of visual foveation mediating that of display time. It also shows that older children make fewer underestimations because they are better at directing their attention and gaze toward areas of interest, and they are more aware of dots in their peripheral vision. Our findings suggest that the development of children's ANS is significantly impacted by the development of nonnumerical cognitive abilities.
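The mediation claim (from display time, through visual foveation, to estimation accuracy) could be checked with a simple Baron-Kenny-style comparison of regressions, sketched below with hypothetical variable and file names.

```python
# Illustrative Baron-Kenny-style mediation check with OLS.
# Variable and file names are hypothetical, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ans_trials.csv")  # hypothetical trial-level data

total = smf.ols("estimate_error ~ display_time", data=df).fit()
a_path = smf.ols("foveation ~ display_time", data=df).fit()
direct = smf.ols("estimate_error ~ display_time + foveation",
                 data=df).fit()

# Mediation is suggested when display_time's coefficient shrinks once
# foveation enters the model (compare `total` vs. `direct`).
print(total.params["display_time"], direct.params["display_time"])
```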
  5. To investigate preferences for mobile and wearable sound awareness systems, we conducted an online survey with 201 deaf and hard of hearing (DHH) participants. The survey explores how demographic factors affect perceptions of sound awareness technologies, gauges interest in specific sounds and sound characteristics, solicits reactions to three design scenarios (smartphone, smartwatch, head-mounted display) and two output modalities (visual, haptic), and probes issues related to the social context of use. While most participants were highly interested in being aware of sounds, this interest was modulated by communication preference, that is, a preference for sign communication, oral communication, or both. Almost all participants wanted both visual and haptic feedback, and 75% preferred to have that feedback on separate devices (e.g., haptic on a smartwatch, visual on a head-mounted display). Other findings related to sound type, full captions versus keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.