Infants begin learning the visual referents of nouns before their first birthday. Despite considerable empirical and theoretical effort, little is known about the statistics of the experiences that enable infants to break into object–name learning. We used wearable sensors to collect infant experiences of visual objects and their heard names for 40 early-learned categories. The analyzed data were from one context that occurs multiple times a day and includes objects with early-learned names: mealtime. The statistics reveal two distinct timescales of experience. At the timescale of many mealtime episodes (n = 87), the visual categories were pervasively present, but naming of the objects in each of those categories was very rare. At the timescale of single mealtime episodes, names and referents did co-occur, but each name–referent pair appeared in very few of the mealtime episodes. The statistics are consistent with incremental learning of visual categories across many episodes and the rapid learning of name–object mappings within individual episodes. The two timescales are also consistent with a known cortical learning mechanism for one-episode learning of associations: new information, the heard name, is incorporated into well-established memories, the seen object category, when the new information co-occurs with the reactivation of that slowly established memory.
Discourse with few words: Coherence statistics, parent-infant actions on objects, and object names
Early object name learning is often conceptualized as a problem of mapping heard names to referents. However, infants do not hear object names as discrete events but rather in extended interactions organized around goal-directed actions on objects. The present study examined the statistical structure of the nonlinguistic events that surround parent naming of objects. Parents and 12-month-old infants were left alone in a room for 10 minutes with 32 objects available for exploration. Parent and infant handling of objects and parent naming of objects were coded. The four measured statistics were adapted from measures used in the study of coherent discourse: (i) a frequency distribution in which actions were frequently directed to a few objects and more rarely to other objects; (ii) repeated returns to the high-frequency objects over the 10-minute play period; (iii) clustered repetitions and continuity of actions on objects; and (iv) structured networks of transitions among objects in play that connected all the played-with objects. Parent naming was infrequent but related to the statistics of object-directed actions. The implications of the discourse-like stream of actions are discussed in terms of learning mechanisms that could support rapid learning of object names from relatively few name–object co-occurrences.
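A minimal sketch, assuming a coded stream of object-directed actions represented as a list of object labels, of how the four coherence statistics described above could be computed; the example sequence, labels, and helper logic are invented for illustration and are not the authors' coding scheme or analysis code.

```python
# Sketch only: rough analogues of the four coherence statistics from a
# hypothetical coded sequence of object-directed actions (invented data).
from collections import Counter
from itertools import groupby

actions = ["duck", "cup", "duck", "duck", "ball", "cup", "duck", "ball", "ball"]

# (i) Frequency distribution: a few objects receive most of the actions.
freq = Counter(actions)

# (ii) Returns: how often play comes back to an object after having left it
#      (number of separate runs on that object, minus the first run).
returns = {obj: max(0, sum(1 for o, _ in groupby(actions) if o == obj) - 1)
           for obj in freq}

# (iii) Continuity / clustering: lengths of runs of consecutive actions on
#       the same object.
run_lengths = [len(list(run)) for _, run in groupby(actions)]

# (iv) Transition network: edges between successively acted-on objects.
edges = {(a, b) for a, b in zip(actions, actions[1:]) if a != b}

print(freq, returns, run_lengths, sorted(edges), sep="\n")
```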
- Award ID(s): 1842817
- PAR ID: 10346413
- Date Published:
- Journal Name: Language Acquisition
- ISSN: 1048-9223
- Page Range / eLocation ID: 1 to 19
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent–infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent, which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.
- Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, and clinicians for a variety of research, instructional, or assessment purposes.
- Object names are a major component of early vocabularies, and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment-to-moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image-level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head-mounted eye tracking, the present study objectively measured individual differences in the moment-to-moment variability of visual instances of the same object, from infants' first-person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants' everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.
- Actions in the world elicit data for learning and do so in a stream of interconnected events. Here, we provide evidence on how toddlers, together with a parent, sample information by acting on toys during exploratory play. We observed 10 min of free-flowing and unconstrained object exploration by toddlers (mean age 21 months) and parents in a room with many available objects (n = 32). Borrowing concepts and measures from the study of narratives, we found that the toy selections are not a string of unrelated events but exhibit a suite of what we call coherence statistics: Zipfian distributions, burstiness, and a network structure. We discuss the transient memory processes that underlie the moment-to-moment toy selections that create this coherence and the role of these statistics in the development of abstract and generalizable systems of knowledge. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.
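As an illustration of two of the coherence statistics named above, the sketch below estimates a burstiness coefficient for one toy and checks whether the transition network among played-with toys is connected. The selection sequence is invented, and the burstiness formula used here, B = (σ − μ)/(σ + μ) over inter-selection gaps, is one common convention and is an assumption rather than the paper's reported method.

```python
# Sketch only: burstiness of one toy and connectivity of the toy-transition
# network, computed from an invented toy-selection sequence.
import statistics

selections = ["car", "block", "car", "car", "doll", "block", "car", "doll", "block"]

# Burstiness of "car": B = (sigma - mu) / (sigma + mu) over gaps between
# successive selections of that toy (B > 0 is bursty, B < 0 is regular).
positions = [i for i, toy in enumerate(selections) if toy == "car"]
gaps = [b - a for a, b in zip(positions, positions[1:])]
mu, sigma = statistics.mean(gaps), statistics.pstdev(gaps)
burstiness = (sigma - mu) / (sigma + mu) if (sigma + mu) else 0.0

# Transition network: undirected edges between successively selected toys;
# a simple flood fill checks whether the edges connect every played-with toy.
edges = {frozenset(pair) for pair in zip(selections, selections[1:])
         if pair[0] != pair[1]}
nodes, reached = set(selections), {selections[0]}
frontier = {selections[0]}
while frontier:
    frontier = {toy for edge in edges if reached & edge
                for toy in edge if toy not in reached}
    reached |= frontier

print(f"burstiness(car) = {burstiness:.2f}, connected = {reached == nodes}")
```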