Does knowledge of other people's minds grow from concrete experience to abstract concepts? Cognitive scientists have hypothesized that infants’ first‐person experience, acting on their own goals, leads them to understand others’ actions and goals. Indeed, classic developmental research suggests that before infants reach for objects, they do not see others’ reaches as goal‐directed. In five experiments (
In the present experiments, 3‐month‐old prereaching infants learned to attribute either object goals or place goals to other people's reaching actions.
Prereaching infants view agents’ actions as goal‐directed, but do not expect these acts to be directed to specific objects, rather than to specific places.
Prereaching infants are open‐minded about the specific goal states that reaching actions aim to achieve.
- NSF-PAR ID:
- Publisher / Repository:
- Date Published:
- Journal Name: Developmental Science
- Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Early object name learning is often conceptualized as a problem of mapping heard names to referents. However, infants do not hear object names as discrete events but rather in extended interactions organized around goal-directed actions on objects. The present study examined the statistical structure of the nonlinguistic events that surround parent naming of objects. Parents and 12-month-old infants were left alone in a room for 10 minutes with 32 objects available for exploration. Parent and infant handling of objects and parent naming of objects were coded. The four measured statistics were from measures used in the study of coherent discourse: (i) a frequency distribution in which actions were frequently directed to a few objects and more rarely to other objects; (ii) repeated returns to the high-frequency objects over the 10-minute play period; (iii) clustered repetitions and continuity of actions on objects; and (iv) structured networks of transitions among objects in play that connected all the played-with objects. Parent naming was infrequent but related to the statistics of object-directed actions. The implications of the discourse-like stream of actions are discussed in terms of learning mechanisms that could support rapid learning of object names from relatively few name-object co-occurrences.
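The four discourse-like statistics described above can be sketched on toy data. This is a minimal illustration, not the study's coding scheme: the object names, the action sequence, and the exact formulations of each measure here are hypothetical.

```python
from collections import Counter
from itertools import groupby

# Hypothetical coded sequence of object-directed actions during play:
# each entry names the object acted on next.
actions = ["ball", "ball", "cup", "ball", "ring",
           "cup", "cup", "ball", "ring", "cup"]

# (i) Frequency distribution: a few objects receive most of the actions.
freq = Counter(actions)

# (ii) Repeated returns: how often play comes back to an object
# after attention has moved elsewhere.
returns = Counter()
prev, seen = None, set()
for obj in actions:
    if obj != prev and obj in seen:
        returns[obj] += 1
    seen.add(obj)
    prev = obj

# (iii) Clustered repetitions: lengths of runs of consecutive
# actions on the same object.
runs = [(obj, len(list(group))) for obj, group in groupby(actions)]

# (iv) Transition network: which objects follow which in play.
transitions = Counter(zip(actions, actions[1:]))
```

On this toy sequence, `freq` shows the skew toward a few objects, `returns` counts re-engagements, `runs` captures bursts of continuity, and `transitions` gives the edges of the play network.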
Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real‐time behaviors required for learning new words during free‐flowing toy play, we measured infants’ visual attention and manual actions on to‐be‐learned toys. Parents and 12‐to‐26‐month‐old infants wore wireless head‐mounted eye trackers, allowing them to move freely around a home‐like lab environment. After the play session, infants were tested on their knowledge of object‐label mappings. We found that how often parents named objects during play did not predict learning, but instead, it was infants’ attention during and around a labeling utterance that predicted whether an object‐label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention–when infants’ hands and eyes were attending to the same object–predicted word learning. Our results implicate a causal pathway through which infants’ bodily actions play a critical role in early word learning.
Infant language learning depends on the distribution of co‐occurrences within language (between words and other words) and between language content and events in the world. Yet infant‐directed speech is not limited to words that refer to perceivable objects and actions. Rather, caregivers’ utterances contain a range of syntactic forms and expressions with diverse attentional, regulatory, social, and referential functions. We conducted a distributional analysis of linguistic content types at the utterance level, and demonstrated that a wide range of content types in maternal speech can be distinguished by their distribution in sequences of utterances and by their patterns of co‐occurrence with infants’ actions. We observed free‐play sessions of 38 12‐month‐old infants and their mothers, annotated maternal utterances for 10 content types, and coded infants’ gaze target and object handling. Results show that all content types tended to repeat in consecutive utterances, whereas preferred transitions between different content types reflected sequences from attention‐capturing to directing and then descriptive utterances. Specific content types were associated with infants’ engagement with objects (declaratives, descriptions, object names), with disengagement from objects (talk about attention, infant's name), and with infants’ gaze at the mother (affirmations). We discuss how structured discourse might facilitate language acquisition by making speech input more predictable and/or by providing clues about high‐level form‐function mappings.
Prior studies found that hand preference trajectories predict preschool language outcomes. However, this approach has been limited to examining bimanual manipulation in toddlers. It is not known whether hand preference during infancy for acquiring objects (i.e., reach‐to‐grasp) similarly predicts childhood language ability. The current study explored this motor‐language developmental cascade in 90 children. Hand preference for acquiring objects was assessed monthly from 6 to 14 months, and language skill was assessed at 5 years. Latent class growth analysis identified three infant hand preference classes: left, early right and late right. Infant hand preference classes predicted 5‐year language skills. Children in the left and early right classes, who were categorized as having a consistent hand preference, had higher expressive and receptive language scores relative to children in the inconsistent late right class. Consistent classes did not differ from each other on language outcomes. Infant hand preference patterns explained more variance for expressive and receptive language relative to previously reported toddler hand preference patterns, above and beyond socio‐economic status. Results suggest that hand preference, measured at different time points across development using a trajectory approach, is reliably linked to later language.
Hand preference trajectories reliably predict preschool language above and beyond SES.
Infants with a consistent hand preference for reaching had greater language skills at 5 years.
Infant hand preference explained more variance in language than toddler hand preference.
When human adults make decisions (e.g., wearing a seat belt), we often consider the negative consequences that would ensue if our actions were to fail, even if we have never experienced such a failure. Do the same considerations guide our understanding of other people's decisions? In this paper, we investigated whether adults, who have many years of experience making such decisions, and 6‐ and 7‐year‐old children, who have less experience and are demonstrably worse at judging the consequences of their own actions, conceive of others' actions as motivated both by reward (how good reaching one's intended goal would be) and by what we call “danger” (how badly one's action could end). In two pre‐registered experiments, we tested whether adults and 6‐ and 7‐year‐old children tailor their predictions and explanations of an agent's action choices to the specific degree of danger and reward entailed by each action. Across four different tasks, we found that children and adults expected others to negatively appraise dangerous situations and minimize the danger of their actions. Children's and adults' judgments varied systematically in accord with both the degree of danger the agent faced and the value the agent placed on the goal state it aimed to achieve. However, children did not calibrate their inferences about how much an agent valued the goal state of a successful action in accord with the degree of danger the action entailed, and adults calibrated these inferences more weakly than inferences concerning the agent's future action choices. These results suggest that from childhood, people use degrees of danger and reward to make quantitative, fine‐grained explanations and predictions about other people's behavior, consistent with computational models of theory of mind that contain continuous representations of other agents' action plans.
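The kind of continuous danger–reward trade-off referenced above can be illustrated with a toy softmax choice rule, in which an observer expects an agent to weigh the reward of success against the cost of failure. This is a hypothetical sketch for illustration only; the numeric values, the utility form, and the function names are assumptions, not the models tested in the study.

```python
import math

def action_value(reward, danger, p_success):
    # Expected value of an action: the reward if it succeeds,
    # a negative outcome scaled by danger if it fails.
    return p_success * reward - (1.0 - p_success) * danger

def choice_probs(options, beta=1.0):
    # Softmax over action values: higher-value actions are predicted
    # more often; beta controls how noisy the agent's choices are.
    values = [action_value(*opt) for opt in options]
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Two routes to the same goal: a safe path and a dangerous shortcut,
# each given as (reward, danger, p_success).
safe = (5.0, 1.0, 0.9)
risky = (5.0, 10.0, 0.9)
probs = choice_probs([safe, risky])
```

Under this sketch, an observer with such a model predicts the safe route more often whenever danger lowers the risky option's expected value, mirroring the graded sensitivity to danger and reward reported in the abstract.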