ABSTRACT Most studies of developing visual attention are conducted using screen‐based tasks in which infants move their eyes to select where to look. However, real‐world visual exploration entails active movements of both eyes and head to bring relevant areas in view. Thus, relatively little is known about how infants coordinate their eyes and heads to structure their visual experiences. Infants were tested every 3 months from 9 to 24 months while they played with their caregiver and three toys while sitting in a highchair at a table. Infants wore a head‐mounted eye tracker that measured eye movement toward each of the visual targets (caregiver's face and toys) and how targets were oriented within the head‐centered field of view (FOV). With age, infants increasingly aligned novel toys in the center of their head‐centered FOV at the expense of their caregiver's face. Both faces and toys were better centered in view during longer looking events, suggesting that infants of all ages aligned their eyes and head to sustain attention. The bias in infants’ head‐centered FOV could not be accounted for by manual action: Held toys were more poorly centered compared with non‐held toys. We discuss developmental factors—attentional, motoric, cognitive, and social—that may explain why infants increasingly adopted biased viewpoints with age. 
Postural developments modulate children’s visual access to social information
            The ability to process social information is a critical component of children’s early language and cognitive development. However, as children reach their first birthday, they begin to locomote themselves, dramatically affecting their visual access to this information. How do these postural and locomotor changes affect children’s access to the social information relevant for word-learning? Here, we explore this question by using head-mounted cameras to record 36 infants’ (8-16 months of age) egocentric visual perspective and use computer vision algorithms to estimate the proportion of faces and hands in infants’ environments. We find that infants’ posture and orientation to their caregiver modulates their access to social information, confirming previous work that suggests motoric developments play a significant role in the emergence of children’s linguistic and social capacities. We suggest that the combined use of head-mounted cameras and the application of new computer vision techniques is a promising avenue for understanding the statistics of infants’ visual and linguistic experience. 
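The abstract above describes estimating the proportion of faces and hands in infants' egocentric video with computer vision. The paper does not specify its pipeline here, so the following is only a minimal sketch of the bookkeeping step: assuming some detector has already produced per-frame bounding boxes (the `Detection` class and `face_stats` helper are illustrative names, not the authors' code), it computes how often a face is in view and how much of the frame faces occupy on average.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Normalized bounding box: x, y, width, height as fractions of the frame.
    x: float
    y: float
    w: float
    h: float

def face_stats(frames):
    """Given a list of frames, each a list of face Detections, return
    (proportion of frames with at least one face,
     mean fraction of frame area covered by faces)."""
    if not frames:
        return 0.0, 0.0
    frames_with_face = sum(1 for dets in frames if dets)
    total_area = sum(sum(d.w * d.h for d in dets) for dets in frames)
    return frames_with_face / len(frames), total_area / len(frames)

# Hypothetical detections for three egocentric frames:
frames = [
    [Detection(0.4, 0.3, 0.2, 0.2)],   # one face covering 4% of the frame
    [],                                # no face in view
    [Detection(0.1, 0.1, 0.1, 0.1),
     Detection(0.6, 0.2, 0.1, 0.1)],   # two small faces, 1% each
]
prop, mean_area = face_stats(frames)
# prop = 2/3; mean_area = (0.04 + 0.0 + 0.02) / 3 = 0.02
```

The same aggregation works for hand detections; only the upstream detector changes.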
- Award ID(s): 1714726
- PAR ID: 10127823
- Date Published:
- Journal Name: Proceedings of the 40th Annual Conference of the Cognitive Science Society.
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Abstract Traditionally, the exogenous control of gaze by external saliencies and the endogenous control of gaze by knowledge and context have been viewed as competing systems, with late infancy seen as a period of strengthening top‐down control over the vagaries of the input. Here we found that one‐year‐old infants control sustained attention through head movements that increase the visibility of the attended object. Freely moving one‐year‐old infants (n = 45) wore head‐mounted eye trackers and head motion sensors while exploring sets of toys of the same physical size. The visual size of the objects, a well‐documented salience, varied naturally with the infant's moment‐to‐moment posture and head movements. Sustained attention to an object was characterized by the tight control of head movements that created and then stabilized a visual size advantage for the attended object. The findings show collaboration between exogenous and endogenous attentional systems and suggest new hypotheses about the development of sustained visual attention.
- 
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant’s egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy-play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, both predicted faster search time and distinguished placed and unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
- 
Children rely on their approximate number system (ANS) to guess quantities from a young age. Studies have shown that older children display better ANS performance, but previous research has not explained this improvement. We show that children’s development in ANS is primarily driven by improved attentional control and awareness of peripheral information. In our experiment, children guessed the number of dots on a computer screen while being eye-tracked. The behavioral and eye-tracking results provide supporting evidence for our account. Our analysis shows that children estimate better under the longer display-time condition and with more visual foveation, with the effect of visual foveation mediating that of time. It also shows that older children make fewer underestimations because they are better at directing their attention and gaze toward areas of interest, and they are also more aware of dots in their peripheral vision. Our findings suggest that the development of children’s ANS is significantly impacted by the development of children’s nonnumerical cognitive abilities.
- 
Abstract Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills—attending to faces and attending to the eyes—surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2‐, 4‐, and 6‐month‐olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces compared to animal faces. Unexpectedly, human faces with both averted heads and eyes received the most attention. This pattern may reflect the early emergence of gaze following—the ability to look where another individual looks—which begins to develop around this age. Infants may be especially interested in averted gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized functional conspecific face detection.