Positive emotions play a critical role in guiding human behavior and social interactions. This study examined whether and how genetic variability in the oxytocin system is linked to individual differences in expressing positive affect in human infants. Our results show that genetic variation in CD38 (rs3796863), previously linked to increased release of oxytocin, was associated with higher rates of positive affective displays among a sample of 7-month-old infants, using established parent-report measures. Moreover, infants displaying increased levels of positive affect (smiling and laughter) also showed enhanced brain responses in the right inferior frontal cortex, a brain region previously linked to perception–action coupling, when viewing others smile at them. These findings suggest that, from early in development, genetic variation in the oxytocin system is associated with individual differences in expressed positive affect, which in turn are linked to differences in perceiving positive affect. This helps uncover the neurobiological processes accounting for variability in the expression and perception of positive affect in infancy.
- Journal Name: Cerebral Cortex Communications
- Sponsoring Org: National Science Foundation
More Like this
The early development of threat perception in infancy might be dependent on caregiver context, but this link has not yet been studied in human infants. This study examined the emergence of the young infant's response to threat in the context of variations in caregiving behavior. Eighty infant‐caregiver dyads (39 female infants, all of western European descent) visited the laboratory when the infant was 5 months old. Each dyad completed a free‐play task, from which we coded for the mother's level of engagement: the amount of talking, close proximity, positive affect, and attention directed toward the infant. When the infant was 7 months old, they came back to the laboratory and we used functional near-infrared spectroscopy and eye tracking to measure infants' neural and attentional responses to threatening angry faces. In response to threat, infants of more‐engaged mothers showed increased brain responses in the left dorsolateral prefrontal cortex—a brain region associated with emotion regulation and cognitive control among adults—and reduced attentional avoidance. These results point to a role for caregiver behavioral context in the early development of brain systems involved in human threat responding.
We propose a method for constructing generative models of 3D objects from a single 3D mesh and improving them through unsupervised low-shot learning from 2D images. Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes. Whereas previous approaches have typically built 3D morphable models from multiple high-quality 3D scans through principal component analysis, we build 3D morphable models from a single scan or template. As we demonstrate in the face domain, these models can be used to infer 3D reconstructions from 2D data (inverse graphics) or 3D data (registration). Specifically, we show that our approach can be used to perform face recognition using only a single 3D template (one scan total, not one per person). We extend our model to a preliminary unsupervised learning framework that enables the learning of the distribution of 3D faces using one 3D template and a small number of 2D images. Our approach is motivated as a potential model for the origins of face perception in human infants, who appear to start with an innate face template and subsequently develop a flexible system for perceiving the 3D structure of any novel face from experience with only 2D images of a relatively small number of familiar faces.
Does knowledge of other people's minds grow from concrete experience to abstract concepts? Cognitive scientists have hypothesized that infants' first‐person experience, acting on their own goals, leads them to understand others' actions and goals. Indeed, classic developmental research suggests that before infants reach for objects, they do not see others' reaches as goal‐directed. In five experiments (N = 117), we test an alternative hypothesis: Young infants view reaching as undertaken for a purpose but are open‐minded about the specific goals that reaching actions are aimed to achieve. We first show that 3‐month‐old infants, who cannot reach for objects, lack the expectation that observed acts of reaching will be directed to objects rather than to places. Infants at the same age learned rapidly, however, that a specific agent's reaching action was directed either to an object or to a place, after seeing the agent reach for the same object regardless of where it was, or to the same place regardless of what was there. In a further experiment, 3‐month‐old infants did not demonstrate such inferences when they observed an actor engaging in passive movements. Thus, before infants have learned to reach and manipulate objects themselves, they infer that reaching actions are goal‐directed, and they are open to learning that the goal of an action is either an object or a place. Highlights:
- In the present experiments, 3‐month‐old prereaching infants learned to attribute either object goals or place goals to other people's reaching actions.
- Prereaching infants view agents' actions as goal‐directed but do not expect these acts to be directed to specific objects rather than to specific places.
- Prereaching infants are open‐minded about the specific goal states that reaching actions aim to achieve.
Identifying people in photographs is a critical task in a wide variety of domains, from national security to journalism to human rights investigations. The task is also fundamentally complex and challenging. With the world population at 7.6 billion and growing, the candidate pool is large. Studies of human face recognition ability show that the average person incorrectly identifies two people as similar 20–30% of the time, and trained police detectives do not perform significantly better. Computer vision-based face recognition tools have gained considerable ground and are now widely available commercially, but comparisons to human performance show mixed results at best [2,10,16]. Automated face recognition techniques, while powerful, also have constraints that may be impractical for many real-world contexts. For example, face recognition systems tend to suffer when the target image or reference images have poor quality or resolution, as blemishes or discolorations may be incorrectly recognized as false positives for facial landmarks. Additionally, most face recognition systems ignore some salient facial features, like scars or other skin characteristics, as well as distinctive non-facial features, like ear shape or hair or facial hair styles. This project investigates how we can overcome these limitations to support person identification tasks. By adjusting confidence thresholds, users of face recognition can generally expect high recall (few false negatives) at the cost of low precision (many false positives). Therefore, we focus our work on the "last mile" of person identification, i.e., helping a user find the correct match among a large set of similar-looking candidates suggested by face recognition. Our approach leverages the powerful capabilities of the human vision system and collaborative sensemaking via crowdsourcing to augment the complementary strengths of automatic face recognition.
The result is a novel technology pipeline combining collective intelligence and computer vision. We scope this project to focus on identifying soldiers in photos from the American Civil War era (1861–1865). An estimated 4,000,000 soldiers fought in the war, and most were photographed at least once, due to decreasing costs, the increasing robustness of the format, and the critical events separating friends and family. Over 150 years later, the identities of most of these portraits have been lost, but as museums and archives increasingly digitize and publish their collections online, the pool of reference photos and information has never been more accessible. Historians, genealogists, and collectors work tirelessly to connect names with faces, using largely manual identification methods [3,9]. Identifying people in historical photos is important for preserving material culture, correcting the historical record, and recognizing contributions of marginalized groups, among other reasons.
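The recall–precision tradeoff behind the "last mile" framing above can be sketched with a few lines of Python. The similarity scores and match labels here are hypothetical toy data, not results from the project; the point is only that lowering a match threshold recovers more true identities (higher recall) while admitting more false matches (lower precision):

```python
def precision_recall(scores, labels, threshold):
    """Compute precision and recall when pairs scoring >= threshold
    are declared matches. labels[i] is True if pair i is a true match."""
    predicted = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(predicted, labels))
    fp = sum(p and not l for p, l in zip(predicted, labels))
    fn = sum((not p) and l for p, l in zip(predicted, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy similarity scores from a hypothetical face matcher.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30]
labels = [True, True, False, True, False, False, True, False]

for t in (0.85, 0.5):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
```

At the strict threshold every declared match is correct but half the true identities are missed; at the loose threshold most true identities surface, buried among false candidates, which is exactly the set a human or crowd must then sift.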