Children rely on their approximate number system (ANS) to estimate quantities from a young age, and studies have shown that older children perform better on ANS tasks. However, previous research has not explained this improvement. We propose that children’s ANS development is driven primarily by improved attentional control and greater awareness of peripheral information. In our experiment, children estimated the number of dots on a computer screen while being eye-tracked. The behavioral and eye-tracking results support this account. Our analysis shows that children estimate more accurately with longer display times and with more visual foveation, and that the effect of display time is mediated by visual foveation. It also shows that older children underestimate less often because they are better at directing their attention and gaze toward areas of interest and are more aware of dots in their peripheral vision. These findings suggest that the development of children’s ANS is substantially shaped by the development of their nonnumerical cognitive abilities.
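The mediation claim above (longer display time improves estimation mainly through increased foveation) can be illustrated with a minimal product-of-coefficients sketch on simulated data. All variable names and effect sizes below are hypothetical placeholders, not values from the study.

```python
# Minimal mediation sketch (product of coefficients) on simulated data.
# Variable names and effect sizes are hypothetical, not taken from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 500

display_time = rng.uniform(0.5, 3.0, n)                    # seconds the dot array is shown
foveation = 0.8 * display_time + rng.normal(0, 0.5, n)     # foveated dots scale with time
accuracy = 0.6 * foveation + 0.05 * display_time + rng.normal(0, 0.5, n)

def ols_slopes(y, *xs):
    """OLS coefficients of y on an intercept plus the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total_c = ols_slopes(accuracy, display_time)[0]             # c : time -> accuracy
a_path = ols_slopes(foveation, display_time)[0]             # a : time -> foveation
b_path, direct_c = ols_slopes(accuracy, foveation, display_time)  # b and c'

print(f"total c = {total_c:.2f}, direct c' = {direct_c:.2f}, indirect a*b = {a_path * b_path:.2f}")
# If foveation mediates the display-time effect, c' is much smaller than c,
# with most of c accounted for by the indirect path a*b.
```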
Deterministic or probabilistic: U.S. children's beliefs about genetic inheritance
Abstract Do children think of genetic inheritance as deterministic or probabilistic? In two novel tasks, children viewed the eye colors of animal parents and judged and selected possible phenotypes of offspring. Across three studies (N = 353; 162 girls, 172 boys, 2 non-binary; 17 did not report gender) with predominantly White U.S. participants, collected in 2019–2021, 4- to 12-year-old children showed a probabilistic understanding of genetic inheritance: they accepted and expected variability in the genetic inheritance of eye color. Children did not show a mother bias, but they did show two novel biases: perceptual similarity and sex-matching. These results held for unfamiliar animals and several physical traits (e.g., eye color, ear size, and fin type), and they persisted after a lesson.
- Award ID(s): 1760940
- PAR ID: 10658620
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Child Development
- Volume: 95
- Issue: 3
- ISSN: 0009-3920
- Format(s): Medium: X
- Size(s): p. e186-e205
- Sponsoring Org: National Science Foundation
More Like this
Abstract Objects and places are foundational spatial domains represented in human symbolic expressions, like drawings, which show a prioritization of depicting small-scale object-shape information over the large-scale navigable place information in which objects are situated. Is there a similar object-over-place bias in language? Across six experiments, adults and 3- to 4-year-old children were asked either to extend a novel noun in a labeling phrase, to extend a novel noun in a prepositional phrase, or to simply match pictures. To dissociate specific object and place information from more general figure and ground information, participants either saw scenes with both place information (a room) and object information (a block in the room), or scenes with two kinds of object information that matched the figure-ground relations of the room and block by presenting an open container with a smaller block inside. While adults showed a specific object-over-place bias in both extending novel noun labels and matching, they did not show this bias in extending novel nouns following prepositions. Young children showed this bias in extending novel noun labels only. Spatial domains may thus confer specific and foundational biases for word learning that may change through development in a way that is similar to that of other word-learning biases about objects, like the shape bias. These results expand the symbolic scope of prior studies on object biases in drawing to object biases in language, and they expand the spatial domains of prior studies characterizing the language of objects and places.
Abstract This study used eye-tracking to examine whether extraneous illustration details (a common design in beginning-reader storybooks) promote attentional competition and hinder learning. The study used a within-subject design with first- and second-grade children. Children (n = 60) read a story in a commercially available Standard condition and in a Streamlined condition, in which extraneous illustrations were removed, while an eye-tracker recorded children’s gaze shifts away from the text, fixations on extraneous illustrations, and fixations on relevant illustrations. Extraneous illustrations promoted attentional competition and hindered reading comprehension: children made more gaze shifts away from the text in the Standard condition than in the Streamlined condition, and reading comprehension was significantly higher in the Streamlined condition. Importantly, fixations on extraneous details accounted for unique variance in reading comprehension after controlling for reading proficiency and attention to relevant illustrations. Furthermore, a follow-up control experiment (n = 60) showed that these effects did not stem solely from enhanced text saliency in the Streamlined condition and reproduced the negative relationship between fixations on extraneous details and reading comprehension. This study provides evidence that the design of reading materials can be optimized to promote literacy development in young children.
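The "unique variance" claim above is a hierarchical-regression idea: adding fixations on extraneous details should raise R² beyond a model that already contains reading proficiency and attention to relevant illustrations. A minimal sketch on simulated data follows; all variable names and effect sizes are hypothetical, not taken from the study.

```python
# Hierarchical-regression sketch of "unique variance explained":
# compare R^2 with and without the predictor of interest.
# Simulated data; names and effect sizes are hypothetical, not from the study.
import numpy as np

rng = np.random.default_rng(1)
n = 60

proficiency = rng.normal(0, 1, n)       # standardized reading proficiency
relevant_fix = rng.normal(0, 1, n)      # fixations on relevant illustrations
extraneous_fix = rng.normal(0, 1, n)    # fixations on extraneous illustrations
comprehension = (0.6 * proficiency + 0.3 * relevant_fix
                 - 0.4 * extraneous_fix + rng.normal(0, 0.5, n))

def r_squared(y, *xs):
    """R^2 of an OLS fit of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared(comprehension, proficiency, relevant_fix)
full = r_squared(comprehension, proficiency, relevant_fix, extraneous_fix)
print(f"delta R^2 for extraneous fixations: {full - base:.3f}")
# A positive delta R^2 is what "accounted for unique variance" means here.
```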
Abstract When making inferences about the mental lives of others (e.g., others’ preferences), it is critical to consider the extent to which the choices we observe are constrained. Prior research on the development of this tendency indicates a contradictory pattern: children show remarkable sensitivity to constraints in traditional experimental paradigms, yet often fail to consider real-world constraints and privilege inherent causes instead. We propose that one explanation for this discrepancy may be that real-world constraints are often stable over time and lose their salience. The present research tested whether children (N = 133, 5- to 12-year-old, mostly US children; 55% female, 45% male) become less sensitive to an actor’s constraints after first observing two constrained actors (Stable condition) versus after first observing two actors in contexts with greater choice (Not Stable condition). We crossed the stability of the constraint with the type of constraint: either the constraint was deterministic, such that there was only one option available (No Other Option constraint), or, in line with many real-world constraints, the constraint was probabilistic, such that there was another option but it was difficult to access (Hard to Access constraint). Results indicated that children in the Stable condition became less sensitive to the probabilistic Hard to Access constraint across trials. Notably, we also found that children’s sensitivity to constraints was enhanced in the Not Stable condition regardless of whether the constraint was probabilistic or deterministic. We discuss implications for children’s sensitivity to real-world constraints.
Research Highlights:
- This research addresses the apparent contradiction that children are sensitive to constraints in experimental paradigms but are often insensitive to constraints in the real world.
- One explanation for this discrepancy is that constraints in the real world tend to be stable over time and may lose their salience.
- When probabilistic constraints (i.e., when a second option is available but hard to access) are stable, children become desensitized to constraints across trials.
- First observing contexts with greater choice increases children’s sensitivity to both probabilistic and deterministic constraints.
Abstract Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism: gesture’s ability to direct visual attention. Behaviorally, we replicate previous findings: children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye-tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more at the problem being explained, less at the instructor, and are more likely to synchronize their visual attention with the information presented in the instructor’s speech (i.e., to follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, they do not mediate the effect of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a more complex relation between gesture and visual attention, in which gesture moderates the impact of visual looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture’s beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
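The mediation-versus-moderation distinction drawn above can be made concrete with an interaction term: "following along with speech" predicts learning only when gesture is present. A minimal sketch on simulated data follows; all variable names and effect sizes are hypothetical, not taken from the study.

```python
# Moderation sketch: the effect of "following along with speech" on learning
# depends on condition (Speech Alone vs. Speech+Gesture), i.e., an interaction.
# Simulated data; names and effect sizes are hypothetical, not from the study.
import numpy as np

rng = np.random.default_rng(2)
n = 200

gesture = rng.integers(0, 2, n)          # 0 = Speech Alone, 1 = Speech+Gesture
follow_speech = rng.normal(0, 1, n)      # standardized "follow along with speech" score
# Following along helps only when gesture is present (the moderation pattern).
learning = 0.2 * gesture + 0.5 * gesture * follow_speech + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), gesture, follow_speech, gesture * follow_speech])
beta, *_ = np.linalg.lstsq(X, learning, rcond=None)
b_gesture, b_follow, b_interaction = beta[1:]

print(f"follow-along slope, Speech Alone:   {b_follow:.2f}")
print(f"follow-along slope, Speech+Gesture: {b_follow + b_interaction:.2f}")
# A near-zero slope in Speech Alone and a positive slope in Speech+Gesture
# is the moderation pattern described above.
```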
