If we want to design for computer science learning in K12 education, then Kindergarten is the place to start. Despite differing formats, early childhood coding tools rely heavily on a similar representational infrastructure. Grids and arrows predominate, introducing a nested series of directional symbols and spatial skills children must learn in order to code. Thus, learning to think computationally is entangled with learning to think spatially and symbolically. This paper analyzes video of Kindergarten students learning to use robot coding toys and examines how they navigated programming’s representational infrastructure. We found that children drew on conventional notions of how objects move, creating a “conceptual blend” for programming robot routes (Fauconnier & Turner, 1998). We argue that coding in Kindergarten requires mapping a series of correspondences from the domain of everyday movements onto the resources available in the representational domain.
Which Way is Up? Orientation and Young Children’s Directional Arrow Interpretations in Coding Contexts
Many coding environments for young children involve using navigational arrow codes representing four movements: forward, backward, rotate left, and rotate right. Children interpreting these four seemingly simple codes encounter a complex interaction of spatial thinking and semantic meaning. In this study of how children interpret directional arrows, we found that they interpret each of the arrows as encoding many meanings and that the orientation of the agent plays a critical role in children’s interpretations. Through iterative rounds of qualitative coding and drawing on two examples, we unpack some common interpretations.
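The orientation-dependence described above can be made concrete with a small simulation. This is a generic grid-robot sketch, not any particular toy's semantics: the heading names, coordinate convention, and code names are assumptions made for illustration.

```python
# Headings in clockwise order; (dx, dy) deltas with y increasing "up" the grid.
HEADINGS = ["N", "E", "S", "W"]
DELTAS = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def step(state, code):
    """Apply one arrow code to a robot state (x, y, heading)."""
    x, y, heading = state
    if code == "forward":
        dx, dy = DELTAS[heading]
        return (x + dx, y + dy, heading)
    if code == "backward":
        dx, dy = DELTAS[heading]
        return (x - dx, y - dy, heading)
    idx = HEADINGS.index(heading)
    if code == "rotate_left":
        return (x, y, HEADINGS[(idx - 1) % 4])
    if code == "rotate_right":
        return (x, y, HEADINGS[(idx + 1) % 4])
    raise ValueError(f"unknown code: {code}")

def run(program, start=(0, 0, "N")):
    """Run a sequence of arrow codes from a starting state."""
    state = start
    for code in program:
        state = step(state, code)
    return state

# The same two-code program lands in different cells depending on the
# agent's starting orientation -- the crux of children's interpretive work:
print(run(["rotate_left", "forward"], (0, 0, "N")))  # (-1, 0, 'W')
print(run(["rotate_left", "forward"], (0, 0, "E")))  # (0, 1, 'N')
```

Because "rotate left" and "forward" are agent-relative, a child reading the arrows from a fixed viewer perspective will predict a different route than the robot actually takes.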
- Award ID(s): 1842116
- PAR ID: 10438348
- Date Published:
- Journal Name: International Society of the Learning Sciences
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Ruis, Andrew; Lee, Seung B. (Eds.) When text datasets are very large, manually coding line by line becomes impractical. As a result, researchers sometimes try to use machine learning algorithms to automatically code text data. One of the most popular algorithms is topic modeling. For a given text dataset, a topic model provides probability distributions of words for a set of “topics” in the data, which researchers then use to interpret the meaning of the topics. A topic model also gives each document in the dataset a score for each topic, which can be used as a non-binary coding for what proportion of a topic is in the document. Unfortunately, it is often difficult to interpret what the topics mean in a defensible way, or to validate document topic proportion scores as meaningful codes. In this study, we examine how keywords from codes developed by human experts were distributed in topics generated from topic modeling. The results show that (1) the top keywords of a single topic often contain words from multiple human-generated codes; and conversely, (2) words from human-generated codes appear as high-probability keywords in multiple topics. These results explain why directly using topics from topic models as codes is problematic. However, they also imply that topic modeling makes it possible for researchers to discover codes from short word lists.
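The document-topic proportion scores described above can be illustrated with a toy sketch. This is not the study's method: the two topics, their word distributions, and the scoring shortcut are invented for illustration, whereas a real topic model (e.g., LDA) would infer them from the corpus.

```python
# Two invented "topics", each a probability distribution over words.
topics = {
    "topic_0": {"code": 0.4, "data": 0.3, "model": 0.3},
    "topic_1": {"child": 0.5, "learn": 0.3, "word": 0.2},
}

def topic_proportions(doc_words, topics):
    """Crude stand-in for a topic model's document-topic scores:
    each topic's share of the probability mass its words contribute."""
    raw = {name: sum(dist.get(w, 0.0) for w in doc_words)
           for name, dist in topics.items()}
    total = sum(raw.values()) or 1.0
    return {name: mass / total for name, mass in raw.items()}

# A document mixing vocabulary from both topics gets a non-binary coding:
# a proportion per topic rather than a single label.
scores = topic_proportions(["code", "model", "child"], topics)
print(scores)
```

The interpretive problem the study identifies arises at the next step: deciding what `topic_0` *means*, when its top keywords may straddle several human-generated codes.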
-
It is well-known that children rapidly learn words, following a range of heuristics. What is less well appreciated is that – because most words are polysemous and have multiple meanings (e.g., ‘glass’ can label a material and drinking vessel) – children will often be learning a new meaning for a known word, rather than an entirely new word. Across four experiments we show that children flexibly adapt a well-known heuristic – the shape bias – when learning polysemous words. Consistent with previous studies, we find that children and adults preferentially extend a new object label to other objects of the same shape. But we also find that when a new word for an object (‘a gup’) has previously been used to label the material composing that object (‘some gup’), children and adults override the shape bias, and are more likely to extend the object label by material (Experiments 1 and 3). Further, we find that, just as an older meaning of a polysemous word constrains interpretations of a new word meaning, encountering a new word meaning leads learners to update their interpretations of an older meaning (Experiment 2). Finally, we find that these effects only arise when learners can perceive that a word’s meanings are related, not when they are arbitrarily paired (Experiment 4). Together, these findings show that children can exploit cues from polysemy to infer how new word meanings should be extended, suggesting that polysemy may facilitate word learning and invite children to construe categories in new ways.
-
Abstract Qualitative coding, or content analysis, is more than just labeling text: it is a reflexive interpretive practice that shapes research questions, refines theoretical insights, and illuminates subtle social dynamics. As large language models (LLMs) become increasingly adept at nuanced language tasks, questions arise about whether—and how—they can assist in large-scale coding without eroding the interpretive depth that distinguishes qualitative analysis from traditional machine learning and other quantitative approaches to natural language processing. In this paper, we present a hybrid approach that preserves hermeneutic value while incorporating LLMs to scale the application of codes to large data sets that are impractical for manual coding. Our workflow retains the traditional cycle of codebook development and refinement, adding an iterative step to adapt definitions for machine comprehension, before ultimately replacing manual with automated text categorization. We demonstrate how to rewrite code descriptions for LLM interpretation, as well as how structured prompts and prompting the model to explain its coding decisions (chain-of-thought) can substantially improve fidelity. Empirically, our case study of socio-historical codes highlights the promise of frontier AI language models to reliably interpret paragraph-long passages representative of a humanistic study. Throughout, we emphasize ethical and practical considerations, preserving space for critical reflection, and the ongoing need for human researchers’ interpretive leadership. These strategies can guide both traditional and computational scholars aiming to harness automation effectively and responsibly—maintaining the creative, reflexive rigor of qualitative coding while capitalizing on the efficiency afforded by LLMs.
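The prompt-engineering step of such a workflow can be sketched as follows. The prompt wording, field layout, code name, and passage are all invented for illustration and are not the authors' actual prompts; the chain-of-thought instruction simply asks the model to explain its reasoning before committing to a decision, as the abstract describes.

```python
def build_coding_prompt(code_name, definition, passage):
    """Assemble a structured coding prompt: a machine-adapted code
    definition, the passage to classify, and a chain-of-thought
    instruction followed by a fixed-format final answer."""
    return (
        f"You are applying the qualitative code '{code_name}'.\n"
        f"Code definition (rewritten for machine comprehension): {definition}\n\n"
        "Passage:\n"
        f'"""\n{passage}\n"""\n\n'
        "First, explain step by step whether the definition applies.\n"
        "Then answer on a final line with exactly: APPLIES: yes or APPLIES: no."
    )

# Hypothetical code and passage, for illustration only:
prompt = build_coding_prompt(
    "social mobility",
    "The passage describes a person or family changing socioeconomic position.",
    "After the war, the family moved to the city and opened a shop.",
)
print(prompt)
```

The fixed-format final line makes the model's decision machine-parseable, while the explanation preserves a record a human researcher can audit.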
-
The goals of the current study were: 1) to modify and expand an existing spatial mathematical language coding system to include quantitative mathematical language terms and 2) to examine the extent to which preschool-aged children used spatial and quantitative mathematical language during a block play intervention. Participants included 24 preschool-aged children (Age M = 57.35 months) who were assigned to a block play intervention. Children participated in up to 14 sessions of 15-to-20-minute block play across seven weeks. Results demonstrated that spatial mathematical language terms were used with a higher raw frequency than quantitative mathematical language terms during the intervention sessions. However, once weighted frequencies were calculated to account for the number of codes in each category, spatial language was only used slightly more than quantitative language during block play. Similar patterns emerged between domains within the spatial and quantitative language categories. These findings suggest that both quantitative and spatial mathematical language usage should be evaluated when considering whether child activities can improve mathematical learning and spatial performance. Further, accounting for the number of codes within categories provided a more representative picture of how mathematical language was used than raw word counts alone. Implications for future research are discussed.
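The weighting step can be illustrated numerically. The counts below are invented, and dividing a category's raw frequency by its number of codes is one plausible reading of the weighting the abstract describes, not the study's exact formula.

```python
def weighted_frequency(raw_count, n_codes_in_category):
    """Normalize a category's raw word count by how many codes
    the category contains, giving an average use per code."""
    return raw_count / n_codes_in_category

# Invented counts: spatial language looks dominant in raw terms...
spatial = weighted_frequency(120, 10)     # 12.0 uses per code
quantitative = weighted_frequency(54, 5)  # 10.8 uses per code
# ...but after weighting, the gap between categories narrows considerably,
# mirroring the pattern the study reports.
print(spatial, quantitative)
```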