The planetary model of the atom is alive and well in middle school science class—and in popular iconography—despite most educated adults’ awareness of its shortcomings. The model persists because it is easily visualized, intuitively understandable, and expresses important truths. Models don’t have to get everything right to be useful. Middle schoolers would be overwhelmed by a more correct description of electron orbitals as probability densities satisfying the Schrödinger equation. Better to simply depict electron orbits as ellipses. In the current era, when immensely powerful AI technologies built on neural networks are rapidly disrupting the world, K-12 students need age-appropriate models of neural networks just as they need age-appropriate models of atoms. We suggest the linear threshold unit as the best model for introducing middle school students to neural computation, and we present an interactive tool, Neuron Sandbox, that facilitates their learning.
This content will become publicly available on April 11, 2026
Learning to Think like a Neuron in Middle School
Neuron Sandbox is a browser-based tool that helps middle school students grasp basic principles of neural computation. It simulates a linear threshold unit applied to binary decision problems, which students solve by adjusting the unit's threshold and/or weights. Although Neuron Sandbox provides extensive visualization aids, solving these problems is challenging for students who have not yet been exposed to algebra. We collected survey, video, and worksheet data from 21 seventh-grade students in two sections of an AI elective that used Neuron Sandbox, both taught by the same teacher. We present a scaffolding strategy that proved effective at guiding these students to mastery of these problems. While the amount of scaffolding required was more than we originally anticipated, by the end of the exercise students understood the computation that linear threshold units perform and were able to generalize their understanding of the worksheet’s “solve for threshold” strategy to also solve for weights.
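The computation such a unit performs can be sketched in a few lines. This is a minimal illustration of a linear threshold unit of the kind Neuron Sandbox simulates; the specific weights, threshold, and decision problem below are hypothetical examples, not taken from the paper's worksheets.

```python
# A minimal sketch of the linear threshold unit that Neuron Sandbox simulates.
# The decision problem (weights, threshold, and scenario) is a hypothetical
# example, not one of the paper's worksheet problems.

def threshold_unit(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of the inputs meets the threshold."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# Binary decision: fire exactly when both binary inputs are on (logical AND).
# With the weights fixed at 1, students can "solve for threshold": 2 works,
# while 1 would incorrectly fire on a single input.
weights = [1, 1]
threshold = 2
for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", threshold_unit([x1, x2], weights, threshold))
```

Solving for weights instead holds the threshold fixed: for example, keeping the threshold at 2 but raising both weights to 2 turns the same unit into a logical OR.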
- PAR ID: 10589509
- Publisher / Repository: Association for the Advancement of Artificial Intelligence
- Date Published:
- Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence
- Volume: 39
- Issue: 28
- ISSN: 2159-5399
- Page Range / eLocation ID: 29212 to 29219
- Subject(s) / Keyword(s): neural networks; artificial intelligence; middle school
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Novice programmers need to write basic code as part of the learning process, but they often face difficulties. To assist struggling students, we recently implemented personalized Parsons problems, code puzzles in which students arrange blocks of code into a correct order, as pop-up scaffolding. Students found them more engaging and preferred them for learning over simply receiving the correct answer, such as the response they might get from generative AI tools like ChatGPT. However, a drawback of using Parsons problems as scaffolding is that students may be able to put the code blocks in the correct order without fully understanding the rationale of the solution, compromising the learning benefits of the scaffolding. Can we improve understanding of personalized Parsons scaffolding by providing textual code explanations? In this poster, we propose a design that incorporates multiple levels of textual explanation for the Parsons problems. This design will be used in future technical evaluations and classroom experiments exploring whether adding textual explanations to Parsons problems improves their instructional benefits.
- Culbertson, J.; Perfors, A.; Rabagliati, H.; Ramenzoni, V. (Eds.) Integrating visual representations in an interactive learning activity effectively scaffolds performance and learning. However, it is unclear whether and how sustaining or interleaving visual scaffolding helps learners solve problems efficiently and learn from problem solving. We conducted a classroom study with 63 middle-school students in which we tested whether sustaining or interleaving a particular form of visual scaffolding, called anticipatory diagrammatic self-explanation, in an Intelligent Tutoring System helps students’ learning and performance in the domain of early algebra. Sustaining visual scaffolding during problem solving helped students solve problems efficiently with no negative effects on learning. However, in-depth log data analyses suggest that interleaving visual scaffolding allowed students to practice important skills that may help them in later phases of algebra learning. This paper extends scientific understanding that sustaining visual scaffolding does not over-scaffold student learning in the early phase of skill acquisition in algebra.
- We propose a novel three-layer neural network architecture with threshold activations for tabular data classification problems. The hidden layer units are trainable neurons with arbitrary weights and biases and a step activation; they are logically equivalent to threshold logic functions. The output layer neuron is also a threshold function, implementing a conjunction of the hidden layer threshold functions. This architecture can leverage state-of-the-art network training methods to achieve high prediction accuracy, and the network is designed so that minimal, human-understandable explanations can be readily derived from the model. Further, we employ a sparsity-promoting regularization approach to sparsify the threshold functions, simplifying them, and to sparsify the output neuron so that it depends on only a small subset of the hidden layer threshold functions. Experimental results show that our approach outperforms other state-of-the-art interpretable decision models in prediction accuracy.
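The architecture described above can be sketched compactly. This is a hand-built toy, not the authors' trained model: the hidden-unit weights and biases below are picked by hand for illustration (they happen to implement XOR), whereas in the paper they are learned and then sparsified.

```python
# A toy sketch (not the authors' code) of the described architecture:
# hidden units with arbitrary weights/biases and a step activation,
# and an output neuron that is a conjunction of the hidden units.
# The weights below are hand-picked for illustration, not trained.

def step(z):
    return 1 if z >= 0 else 0

def hidden_unit(x, weights, bias):
    """A threshold logic function: step(w . x + b)."""
    return step(sum(w * xi for w, xi in zip(weights, x)) + bias)

# Two hidden threshold functions: (x1 OR x2) and NOT(x1 AND x2).
HIDDEN = [([1.0, 1.0], -0.5), ([-1.0, -1.0], 1.5)]

def predict(x):
    h = [hidden_unit(x, w, b) for w, b in HIDDEN]
    # Output neuron: a conjunction of the hidden units, itself a
    # threshold function that fires only when every hidden unit fires.
    return step(sum(h) - len(h))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", predict(x))  # this particular conjunction computes XOR
```

Expressing the output conjunction as `step(sum(h) - len(h))` keeps the whole network inside the threshold-function formalism, which is what makes rule-like explanations easy to read off the model.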
