Title: All contexts are not created equal: Social stimuli win the competition for organizing reinforcement learning in 9‐month‐old infants
Abstract: Previous work has shown that infants as young as 8 months of age can use certain features of the environment, such as the shape or color of visual stimuli, as cues to organize simple inputs into hierarchical rule structures, a robust form of reinforcement learning that supports generalization of prior learning to new contexts. However, especially in cluttered naturalistic environments, there is an abundance of potential cues that can be used to structure learning into hierarchical rule structures. It is unclear how infants determine what features constitute a higher‐order context to organize inputs into hierarchical rule structures. Here, we examine whether 9‐month‐old infants are biased to use social stimuli, relative to non‐social stimuli, as a higher‐order context to organize learning of simple visuospatial inputs into hierarchical rule sets. Infants were presented with four face/color‐target location pairings, which could be learned most simply as individual associations. Alternatively, infants could use the faces or colorful backgrounds as a higher‐order context to organize the inputs into simpler color‐location or face‐location rules, respectively. Infants were then given a generalization test designed to probe how they learned the initial pairings. The results indicated that infants appeared to use the faces as a higher‐order context to organize simpler color‐location rules, which then supported generalization of learning to new face contexts. These findings provide new evidence that infants are biased to organize reinforcement learning around social stimuli.
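To make the abstract's distinction concrete, the toy sketch below (with entirely hypothetical faces, colors, and pairings; it is not the authors' stimuli, model, or analysis) contrasts memorizing each face/color pairing as a flat association with treating faces as a higher-order context that organizes a reusable color-location rule set, and shows why only the latter generalizes to a novel face.

```python
# Illustrative sketch only (hypothetical pairings; not the authors' stimuli or model).
# It contrasts memorizing each face/color pairing individually with treating faces
# as higher-order contexts that organize reusable color -> location rules.

# Hypothetical training pairings: (face context, background color) -> rewarded location
training = [
    ("face_A", "red",  "left"),
    ("face_A", "blue", "right"),
    ("face_B", "red",  "left"),
    ("face_B", "blue", "right"),
]

# Strategy 1: flat associations, one entry per exact pairing.
flat = {(face, color): loc for face, color, loc in training}

# Strategy 2: hierarchical structure, where each face context holds a nested
# color -> location rule set.
hierarchical = {}
for face, color, loc in training:
    hierarchical.setdefault(face, {})[color] = loc

# In this toy data the nested rule sets are identical across face contexts,
# so the rule set can transfer as a unit to a brand-new face.
shared_rule_set = hierarchical["face_A"]

def predict_flat(face, color):
    return flat.get((face, color))            # None for any unseen face

def predict_hierarchical(face, color):
    rules = hierarchical.get(face, shared_rule_set)   # reuse rules in a novel context
    return rules.get(color)

print(predict_flat("face_C", "red"))           # None: no stored association
print(predict_hierarchical("face_C", "red"))   # "left": the color rule generalizes
```

The only point of the sketch is that organizing inputs under a higher-order context makes the nested rule set portable to new contexts, which is the generalization signature the study probes.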
Award ID(s):
2051819
PAR ID:
10449788
Author(s) / Creator(s):
 ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Developmental Science
Volume:
24
Issue:
5
ISSN:
1363-755X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills—attending to faces and attending to the eyes—surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2‐, 4‐, and 6‐month‐olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces compared to animal faces. Unexpectedly, human faces with both averted heads and eyes received the most attention. This pattern may reflect the early emergence of gaze following—the ability to look where another individual looks—which begins to develop around this age. Infants may be especially interested in averted gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized functional conspecific face detection. 
  2. Abstract Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and the neural response to reduplicative linguistic signs differs from reduplicative visual controls, matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response for reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready. They possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign. 
  3. Abstract OpenFlow-compliant commodity switches face challenges in efficiently managing flow rules due to the limited capacity of expensive high-speed memories used to store them. The accumulation of inactive flows can disrupt ongoing communication, necessitating an optimized approach to flow rule timeouts. This paper proposes Delayed Dynamic Timeout (DDT), a Reinforcement Learning-based approach to dynamically adjust flow rule timeouts and enhance the utilization of a switch’s flow table(s) for improved efficiency. Despite the dynamic nature of network traffic, our DDT algorithm leverages advancements in Reinforcement Learning algorithms to adapt and achieve flow-specific optimization objectives. The evaluation results demonstrate that DDT outperforms static timeout values in terms of both flow rule match rate and flow rule activity. By continuously adapting to changing network conditions, DDT showcases the potential of Reinforcement Learning algorithms to effectively optimize flow rule management. This research contributes to the advancement of flow rule optimization techniques and highlights the feasibility of applying Reinforcement Learning in the context of SDN. 
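For readers unfamiliar with how reinforcement learning can drive timeout selection, the sketch below is a generic tabular Q-learning toy, not the paper's DDT algorithm; the state buckets, candidate timeouts, and reward function are invented purely for illustration.

```python
# Generic sketch of the underlying idea (not the paper's DDT algorithm): a tabular
# Q-learning agent that picks an idle timeout for a flow rule from a discrete set,
# using a made-up state (recent activity bucket) and a made-up reward that trades
# off rule matches against flow-table occupancy.
import random
from collections import defaultdict

TIMEOUTS = [1, 5, 10, 30]          # candidate idle timeouts in seconds (hypothetical)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(lambda: [0.0] * len(TIMEOUTS))

def choose_action(state):
    """Epsilon-greedy choice of a timeout index for the given state."""
    if random.random() < EPSILON:
        return random.randrange(len(TIMEOUTS))
    return max(range(len(TIMEOUTS)), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def toy_reward(timeout, packets_in_window):
    """Hypothetical reward: credit matches kept in-table, penalize long timeouts
    that hold table space for inactive flows."""
    return min(packets_in_window, 5) - 0.1 * timeout

# Toy training loop over simulated flow-activity observations.
for episode in range(1000):
    state = random.choice(["idle", "bursty", "steady"])   # hypothetical activity buckets
    action = choose_action(state)
    packets = {"idle": 0, "bursty": 8, "steady": 3}[state] + random.randint(0, 2)
    next_state = random.choice(["idle", "bursty", "steady"])
    update(state, action, toy_reward(TIMEOUTS[action], packets), next_state)

# Preferred timeout per activity bucket after training.
print({s: TIMEOUTS[max(range(len(TIMEOUTS)), key=lambda a: Q[s][a])] for s in Q})
```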
  4. Learning logical rules is critical to improving reasoning in KGs. This is due to their ability to provide logical and interpretable explanations when used for predictions, as well as their ability to generalize to other tasks, domains, and data. While recent methods have been proposed to learn logical rules, the majority of these methods are either restricted by their computational complexity and cannot handle the large search space of large-scale KGs, or show poor generalization when exposed to data outside the training set. In this paper, we propose an end-to-end neural model for learning compositional logical rules called NCRL. NCRL detects the best compositional structure of a rule body, and breaks it into small compositions in order to infer the rule head. By recurrently merging compositions in the rule body with a recurrent attention unit, NCRL finally predicts a single rule head. Experimental results show that NCRL learns high-quality rules, as well as being generalizable. Specifically, we show that NCRL is scalable, efficient, and yields state-of-the-art results for knowledge graph completion on large-scale KGs. Moreover, we test NCRL for systematic generalization by learning to reason on small-scale observed graphs and evaluating on larger unseen ones. 
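The sketch below is a heavily simplified illustration of the recursive merge-and-predict idea described in the abstract, not the NCRL implementation; the relations, embeddings, and scoring/merging parameters are hypothetical stand-ins rather than learned components.

```python
# Heavily simplified sketch of the general idea (not the NCRL implementation):
# a chain of relation embeddings is recursively collapsed by scoring adjacent
# pairs, merging the highest-scoring pair into one composed embedding, and
# finally reading out a rule head as the nearest known relation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
RELATIONS = ["born_in", "city_of", "located_in", "nationality"]   # hypothetical
emb = {r: rng.normal(size=DIM) for r in RELATIONS}

# Stand-ins for learned parameters; here they are random placeholders.
W_score = rng.normal(size=(2 * DIM,))      # scores how "composable" a pair is
W_merge = rng.normal(size=(DIM, 2 * DIM))  # maps a pair to a composed embedding

def score(a, b):
    return float(W_score @ np.concatenate([a, b]))

def merge(a, b):
    return np.tanh(W_merge @ np.concatenate([a, b]))

def infer_head(body_relations):
    """Collapse a rule body such as [born_in, city_of, located_in] into one
    vector, then pick the closest relation embedding as the predicted head."""
    seq = [emb[r] for r in body_relations]
    while len(seq) > 1:
        # pick the adjacent pair with the highest composition score, then merge it
        i = max(range(len(seq) - 1), key=lambda k: score(seq[k], seq[k + 1]))
        seq[i:i + 2] = [merge(seq[i], seq[i + 1])]
    final = seq[0]
    return max(RELATIONS, key=lambda r: float(emb[r] @ final))

print(infer_head(["born_in", "city_of", "located_in"]))
```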
  5. Natural target functions and tasks typically exhibit hierarchical modularity – they can be broken down into simpler sub-functions that are organized in a hierarchy. Such sub-functions have two important features: they have a distinct set of inputs (input-separability) and they are reused as inputs higher in the hierarchy (reusability). Previous studies have established that hierarchically modular neural networks, which are inherently sparse, offer benefits such as learning efficiency, generalization, multi-task learning, and transfer. However, identifying the underlying sub-functions and their hierarchical structure for a given task can be challenging. The high-level question in this work is: if we learn a task using a sufficiently deep neural network, how can we uncover the underlying hierarchy of sub-functions in that task? As a starting point, we examine the domain of Boolean functions, where it is easier to determine whether a task is hierarchically modular. We propose an approach based on iterative unit and edge pruning (during training), combined with network analysis for module detection and hierarchy inference. Finally, we demonstrate that this method can uncover the hierarchical modularity of a wide range of Boolean functions and two vision tasks based on the MNIST digits dataset. 
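As a rough illustration of the general recipe (iterative pruning during training followed by module detection on the surviving connectivity), the toy below trains a small network on one hierarchically modular Boolean function, prunes low-magnitude weights periodically, and runs community detection on the remaining edge graph; it is an assumption-laden sketch, not the paper's method or hyperparameters.

```python
# Toy sketch (assumptions throughout; not the paper's method): train a small
# network on a hierarchically modular Boolean function, prune low-magnitude
# weights during training, then run community detection on the surviving
# connectivity graph to look for modules.
import torch
import torch.nn as nn
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

torch.manual_seed(0)

# Hierarchically modular Boolean target: f(x) = (x0 AND x1) XOR (x2 AND x3)
X = torch.tensor([[(i >> b) & 1 for b in range(4)] for i in range(16)], dtype=torch.float32)
y = ((X[:, 0] * X[:, 1]).int() ^ (X[:, 2] * X[:, 3]).int()).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if step % 500 == 499:
        # Periodic magnitude pruning: zero out the weakest half of each layer's
        # weights (no persistent mask, so weights may partially regrow between prunings).
        with torch.no_grad():
            for layer in (model[0], model[2]):
                w = layer.weight
                threshold = w.abs().flatten().quantile(0.5)
                w.mul_((w.abs() >= threshold).float())

# Build an undirected graph over the surviving edges and detect communities.
G = nx.Graph()
w1 = model[0].weight.detach()   # shape: hidden x input
w2 = model[2].weight.detach()   # shape: output x hidden
for h in range(w1.shape[0]):
    for i in range(w1.shape[1]):
        if w1[h, i].abs() > 1e-6:
            G.add_edge(f"in{i}", f"h{h}", weight=float(w1[h, i].abs()))
for o in range(w2.shape[0]):
    for h in range(w2.shape[1]):
        if w2[o, h].abs() > 1e-6:
            G.add_edge(f"h{h}", f"out{o}", weight=float(w2[o, h].abs()))

print([sorted(c) for c in greedy_modularity_communities(G, weight="weight")])
```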