Title: Emerging Adults Mirror Infants’ Emotions and Yawns
ABSTRACT Infants’ nonverbal expressions—a broad smile or a sharp cry—are powerful at eliciting reactions. Although parents’ reactions to their own infants’ expressions are relatively well understood, here we studied whether adults more generally exhibit behavioral and physiological reactions to unfamiliar infants producing various expressions. We recruited U.S. emerging adults (N = 84) prior to parenthood, 18–25 years old, 68% women, ethnically (20% Hispanic/Latino) and racially (7% Asian, 13% Black, 1% Middle Eastern, 70% White, 8% multiracial) diverse. They observed four 80‐s audio–video clips of unfamiliar 2‐ to 6‐month‐olds crying, smiling, yawning, and sitting calmly (emotionally neutral control). Each compilation video depicted 9 different infants (36 clips total). We found adults mirrored behaviorally and physiologically: more positive facial expressions to infants smiling, and more negative facial expressions and pupil dilation—indicating increases in arousal—to infants crying. Adults also yawned more and had more pupil dilation when observing infants yawning. Together, these findings suggest that even nonparent emerging adults are highly sensitive to unfamiliar infants’ expressions, which they naturally “catch” (i.e., behaviorally and physiologically mirror), even without instructions. Such sensitivity may have—over the course of humans’ evolutionary history—been selected for, to facilitate adults’ processing of preverbal infants’ expressions to meet their needs.
Award ID(s):
1653737
PAR ID:
10535131
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Journal Name:
Developmental Psychobiology
Volume:
66
Issue:
6
ISSN:
0012-1630
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract In childhood, higher levels of temperamental fear—an early‐emerging proclivity to distress in the face of novelty—are associated with lower social responsivity and greater social anxiety. While the early emergence of temperamental fear in infancy is poorly understood, it is theorized to be driven by individual differences in reactivity and self‐regulation to novel stimuli. The current study used eye tracking to capture infants’ (N = 124) reactions to a video of a smiling stranger—a common social encounter—including infant gaze aversions from the stranger's face (indexing arousal regulation) and pupil dilation (indexing physiological reactivity), longitudinally at 2, 4, 6, and 8 months of age. Multilevel mixed‐effects models indicated that more fearful infants took more time to look away from a smiling stranger's face than less fearful infants, suggesting that high‐fear infants may have slower arousal regulation. At 2 and 4 months, more fearful infants also exhibited greater and faster pupil dilation before gaze aversions, consistent with greater physiological reactivity. Together, these findings suggest that individual differences in infants’ gaze aversions and pupil dilation can index the development of fearful temperament in early infancy, facilitating the identification of, and interventions for, risk factors for social disruptions.
  2. Oxytocin is a neuropeptide positively associated with prosociality in adults. Here, we studied whether infants' salivary oxytocin can be reliably measured, is developmentally stable, and is linked to social behavior. We longitudinally collected saliva from 62 U.S. infants (44% female, 56% Hispanic/Latino, 24% Black, 18% non-Hispanic White, 11% multiracial) at 4, 8, and 14 months of age and offline-video-coded the valence of their facial affect in response to a video of a smiling woman. We also captured infants' affective reactions in terms of excitement/joyfulness during a live, structured interaction with a singing woman in the Early Social Communication Scales at 14 months. We detected stable individual differences in infants' oxytocin levels over time (over minutes and months) and in infants' positive affect over months and across contexts (video-based and in live interactions). We detected no statistically significant changes in oxytocin levels between 4 and 8 months but found an increase from 8 to 14 months. Infants with higher oxytocin levels showed more positive facial affect to a smiling person video at 4 months; however, this association disappeared at 8 months, and reversed at 14 months (i.e., higher oxytocin was associated with less positive facial affect). Infant salivary oxytocin may be a reliable physiological measure of individual differences related to socio-emotional development.
  3. This paper demonstrates the utility of ambient-focal attention and pupil dilation dynamics to describe visual processing of emotional facial expressions. Pupil dilation and focal eye movements reflect deeper cognitive processing and thus shed more light on the dynamics of emotional expression recognition. Socially anxious individuals (N = 24) and non-anxious controls (N = 24) were asked to recognize emotional facial expressions that gradually morphed from a neutral expression to one of happiness, sadness, or anger in 10-sec animations. Anxious cohorts exhibited more ambient face scanning than their non-anxious counterparts. We observed a positive relationship between focal fixations and pupil dilation, indicating deeper processing of viewed faces, but only by non-anxious participants, and only during the last phase of emotion recognition. Group differences in the dynamics of ambient-focal attention support the hypothesis of vigilance to emotional expression processing by socially anxious individuals. We discuss the results by referring to current literature on cognitive psychopathology.
  4. Abstract Infants vary in their ability to follow others’ gazes, but it is unclear how these individual differences emerge. We tested whether social motivation levels in early infancy predict later gaze following skills. We longitudinally tracked infants’ (N = 82) gazes and pupil dilation while they observed videos of a woman looking into the camera simulating eye contact (i.e., mutual gaze) and then gazing toward one of two objects, at 2, 4, 6, 8, and 14 months of age. To improve measurement validity, we used confirmatory factor analysis to combine multiple observed measures to index the underlying constructs of social motivation and gaze following. Infants’ social motivation—indexed by their speed of social orienting, duration of mutual gaze, and degree of pupil dilation during mutual gaze—was developmentally stable and positively predicted the development of gaze following—indexed by their proportion of time looking to the target object, first object look difference scores, and first face‐to‐object saccade difference scores—from 6 to 14 months of age. These findings suggest that infants’ social motivation likely plays a role in the development of gaze following and highlight the use of a multi‐measure approach to improve measurement sensitivity and validity in infancy research. 
  5. In general, people tend to identify the emotions of others from their facial expressions; however, recent findings suggest that we may be more accurate when we hear someone’s voice than when we look only at their facial expression. The study reported in the paper examined whether these findings hold true for animated agents. A total of 37 subjects participated in the study: 19 males, 14 females, and 4 of non-specified gender. Subjects were asked to view 18 video stimuli; 9 clips featured a male agent and 9 clips a female agent. Each agent showed 3 different facial expressions (happy, angry, neutral), each one paired with 3 different voice lines spoken in three different tones (happy, angry, neutral). Hence, in some clips the agent’s tone of voice and facial expression were congruent, while in other clips they were not. Subjects answered questions regarding the emotion they believed the agent was feeling and rated the emotion’s intensity, typicality, and sincerity. Findings showed that emotion recognition rate and ratings of emotion intensity, typicality, and sincerity were highest when the agent’s face and voice were congruent. However, when the channels were incongruent, subjects identified the emotion more accurately from the agent’s facial expression than from the tone of voice.