


Title: Revisiting Analogical Reasoning in Computing Education: Use of Similarities Between Robot Programming Tasks in Debugging

Analogical reasoning is considered a critical cognitive skill in programming. However, it has rarely been studied in a block-based programming context, especially one involving both virtual and physical objects. In this multi-case study, we examined how novice programming learners majoring in early childhood education used analogical reasoning while debugging block code to make a robot perform properly. Screen recordings, scaffolding entries, reflections, and block code were analyzed. The cross-case analysis suggested that multimodal objects enabled the novice programming learners to identify and use structural relations. The use of a robot eased verification by letting them test their analogies immediately after applying them. Noticing functional similarities led to noticing relational similarities between blocks of code, as well as between the block code and the robot, which guided participants in locating bugs. Implications and directions for future educational computing research are discussed.

 
Award ID(s): 1906059, 1927595
NSF-PAR ID: 10472063
Author(s) / Creator(s): ; ; ; ; ;
Publisher / Repository: Journal of Educational Computing Research
Date Published:
Journal Name: Journal of Educational Computing Research
Volume: 61
Issue: 5
ISSN: 0735-6331
Page Range / eLocation ID: 1036 to 1063
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Advances in artificial intelligence have raised a basic question about human intelligence: Is human reasoning best emulated by applying task‐specific knowledge acquired from a wealth of prior experience, or is it based on the domain‐general manipulation and comparison of mental representations? We address this question for the case of visual analogical reasoning. Using realistic images of familiar three‐dimensional objects (cars and their parts), we systematically manipulated viewpoints, part relations, and entity properties in visual analogy problems. We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) that were directly trained to solve these problems and to apply their task‐specific knowledge to analogical reasoning. We also developed a new model using part‐based comparison (PCM) by applying a domain‐general mapping procedure to learned representations of cars and their component parts. Across four‐term analogies (Experiment 1) and open‐ended analogies (Experiment 2), the domain‐general PCM model, but not the task‐specific deep learning models, generated performance similar in key aspects to that of human reasoners. These findings provide evidence that human‐like analogical reasoning is unlikely to be achieved by applying deep learning with big data to a specific type of analogy problem. Rather, humans do (and machines might) achieve analogical reasoning by learning representations that encode structural information useful for multiple tasks, coupled with efficient computation of relational similarity.
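    The abstract does not spell out how the PCM model computes relational similarity over part-based representations. The minimal sketch below is an illustrative assumption rather than the authors' implementation: it represents the relation within a pair of learned part embeddings as a difference vector and compares relations by cosine similarity. The embedding dimensionality and the random vectors are placeholders.

    ```python
    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def relational_similarity(a, b, c, d):
        """Compare the relation A:B with the relation C:D.

        Each argument is a learned feature vector for an object or part
        (hypothetical embeddings; the actual PCM representations differ).
        The relation is encoded here simply as a difference vector.
        """
        return cosine(b - a, d - c)

    # Toy example with random 64-dimensional embeddings standing in for
    # learned representations of a car and two of its parts.
    rng = np.random.default_rng(0)
    car, wheel, door = rng.normal(size=(3, 64))
    print(relational_similarity(car, wheel, car, door))
    ```

    Under this reading, a four-term analogy A:B::C:? would be answered by choosing the candidate D that maximizes relational_similarity(A, B, C, D).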

     
  2. Abstract

    It is critical to teach all learners to program and think through programming. But doing so requires that early childhood teacher candidates learn to teach computer science, which in turn requires novel pedagogy that can not only help such teachers learn the needed skills but also provide a model for their future teaching. In this study, we examined how early childhood teacher candidates learned to program and debug block-based code with and without scaffolding. We aimed to see how approaches to debugging varied between early childhood teacher candidates who were provided debugging scaffolds during block-based programming and those who were not. This qualitative case study focused on 13 undergraduates majoring in early childhood education. Data sources included video recordings during debugging, semi-structured interviews, and (in the case of those who used scaffolding) scaffold responses. Research team members coded data independently and then came to consensus. With hypothesis-driven scaffolds, participants persisted longer. Use of the scaffolds enabled the instructor to let participants struggle without providing immediate help. Collaborative reasoning was observed among the scaffolded participants, whereas the participants without scaffolds often debugged alone. Regardless of scaffolds, participants often engaged in embodied debugging and also used trial and error. This study provides evidence that debugging can succeed even when it involves trial and error, which implies that attempting to prevent trial and error may be counterproductive in some contexts. Rather, computer science educators may be advised to promote productive struggle.

     
  3. Abstract

    We examined the relationship between metaphor comprehension and verbal analogical reasoning in young adults who were either typically developing (TD) or diagnosed with Autism Spectrum Disorder (ASD). The ASD sample was highly educated and high in verbal ability, and closely matched to a subset of TD participants on age, gender, educational background, and verbal ability. Additional TD participants with a broader range of abilities were also tested. Each participant solved sets of verbal analogies and metaphors in verification formats, allowing measurement of both accuracy and reaction times. Measures of individual differences in vocabulary, verbal working memory, and autistic traits were also obtained. Accuracy for both the verbal analogy and the metaphor task was very similar across the ASD and matched TD groups. However, reaction times on both tasks were longer for the ASD group. Additionally, stronger correlations between verbal analogical reasoning and working memory capacity in the ASD group indicated that processing verbal analogies was more effortful for them. In the case of both groups, accuracy on the metaphor and analogy tasks was correlated. A mediation analysis revealed that after controlling for working memory capacity, the inter‐task correlation could be accounted for by the mediating variable of vocabulary knowledge, suggesting that the primary common mechanisms linking the two tasks involve language skills.
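    As a rough illustration of the kind of mediation analysis reported here (vocabulary knowledge mediating the analogy-metaphor correlation, with working memory capacity as a covariate), the sketch below runs a simple regression-based comparison of total and direct effects on synthetic data. The variable names, effect sizes, and procedure are illustrative assumptions, not the authors' data or exact analysis.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic per-participant scores (illustrative only).
    rng = np.random.default_rng(1)
    n = 60
    wm = rng.normal(size=n)                                  # working memory (covariate)
    vocab = 0.5 * wm + rng.normal(size=n)                    # vocabulary (mediator)
    analogy = 0.6 * vocab + 0.2 * wm + rng.normal(size=n)    # analogy accuracy (predictor)
    metaphor = 0.7 * vocab + 0.1 * wm + rng.normal(size=n)   # metaphor accuracy (outcome)

    def ols(y, *predictors):
        """Ordinary least squares with an intercept; returns fitted results."""
        X = sm.add_constant(np.column_stack(predictors))
        return sm.OLS(y, X).fit()

    # Total effect of analogy on metaphor, controlling for working memory.
    total = ols(metaphor, analogy, wm)
    # Direct effect once the mediator (vocabulary) enters the model.
    direct = ols(metaphor, analogy, wm, vocab)

    print("total effect:", round(total.params[1], 3))
    print("direct effect:", round(direct.params[1], 3))
    # A substantial drop from the total to the direct effect is the pattern
    # consistent with vocabulary mediating the analogy-metaphor relationship.
    ```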

     
  4. Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K. (Eds.)
    Is analogical reasoning a task that must be learned to solve from scratch by applying deep learning models to massive numbers of reasoning problems? Or are analogies solved by computing similarities between structured representations of analogs? We address this question by comparing human performance on visual analogies created using images of familiar three-dimensional objects (cars and their subregions) with the performance of alternative computational models. Human reasoners achieved above-chance accuracy for all problem types, but made more errors in several conditions (e.g., when relevant subregions were occluded). We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) directly trained to solve these analogy problems, as well as to that of a compositional model that assesses relational similarity between part-based representations. The compositional model based on part representations, but not the deep learning models, generated qualitative performance similar to that of human reasoners. 
  5. By middle childhood, humans are able to learn abstract semantic relations (e.g., antonym, synonym, category membership) and use them to reason by analogy. A deep theoretical challenge is to show how such abstract relations can arise from nonrelational inputs, thereby providing key elements of a protosymbolic representation system. We have developed a computational model that exploits the potential synergy between deep learning from “big data” (to create semantic features for individual words) and supervised learning from “small data” (to create representations of semantic relations between words). Given as inputs labeled pairs of lexical representations extracted by deep learning, the model creates augmented representations by remapping features according to the rank of differences between values for the two words in each pair. These augmented representations aid in coping with the feature alignment problem (e.g., matching those features that make “love-hate” an antonym with the different features that make “rich-poor” an antonym). The model extracts weight distributions that are used to estimate the probabilities that new word pairs instantiate each relation, capturing the pattern of human typicality judgments for a broad range of abstract semantic relations. A measure of relational similarity can be derived and used to solve simple verbal analogies with human-level accuracy. Because each acquired relation has a modular representation, basic symbolic operations are enabled (notably, the converse of any learned relation can be formed without additional training). Abstract semantic relations can be induced by bootstrapping from nonrelational inputs, thereby enabling relational generalization and analogical reasoning.
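    The "rank of differences" remapping lends itself to a compact sketch. The following is one simplified reading, not the published model: the features of each word pair are reordered by the rank of the element-wise difference between the two embeddings, and a standard classifier is trained on the augmented pair representations to estimate the probability that a new pair instantiates a relation. The embeddings, labels, and choice of classifier are stand-ins.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def augmented_pair(w1, w2):
        """Remap a word pair's features by the rank of their differences.

        Simplified reading of the abstract's remapping, not the authors'
        exact procedure: dimensions are reordered so that those where w1
        most exceeds w2 come first, roughly aligning relation-relevant
        features across different pairs (e.g., love-hate vs. rich-poor).
        """
        order = np.argsort(w2 - w1)  # ascending w2 - w1 == descending w1 - w2
        return np.concatenate([w1[order], w2[order], (w1 - w2)[order]])

    # Toy training set: pairs labeled as instantiating a relation (1) or not (0),
    # with random vectors standing in for deep-learning word embeddings.
    rng = np.random.default_rng(2)
    dim, n_pairs = 50, 200
    pairs = rng.normal(size=(n_pairs, 2, dim))
    labels = rng.integers(0, 2, size=n_pairs)

    X = np.array([augmented_pair(a, b) for a, b in pairs])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)

    # Estimated probability that a new pair instantiates the learned relation.
    w_a, w_b = rng.normal(size=(2, dim))
    print(clf.predict_proba(augmented_pair(w_a, w_b)[None, :])[0, 1])
    ```

    Training one classifier per relation keeps each relation's representation modular, which is the property the abstract ties to symbolic operations such as forming a relation's converse without additional training.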

     