This paper introduces a quadratic growth learning trajectory, a series of transitions in students’ ways of thinking (WoT) and ways of understanding (WoU) quadratic growth in response to instructional supports emphasizing change in linked quantities. We studied middle grade (ages 12–13) students’ conceptions during a small-scale teaching experiment aimed at fostering an understanding of quadratic growth as a phenomenon of constantly changing rate of change. We elaborate the duality, necessity, repeated reasoning framework and methods of creating learning trajectories. We report five WoT: Variation, Early Coordinated Change, Explicitly Quantified Coordinated Change, Dependency Relations of Change, and Correspondence. We also articulate instructional supports that engendered transitions across these WoT: teacher moves, norms, and task design features. Our integration of instructional supports and transitions in students’ WoT extends current research on quadratic functions. A visual metaphor is leveraged to discuss the role of learning trajectories research in unifying research on teaching and learning.
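As a purely numerical illustration of the covariation idea the abstract references (quadratic growth as a constantly changing rate of change), the short Python sketch below computes first and second differences of a generic quadratic sequence; it is not a task from the teaching experiment, just a demonstration that the rate of change grows linearly while the second differences stay constant.

```python
# Illustrative only: quadratic growth (here y = x^2) has a rate of change
# that itself changes at a constant rate, i.e. constant second differences.
xs = list(range(8))
ys = [x ** 2 for x in xs]                                   # quadratic growth
first_diffs = [b - a for a, b in zip(ys, ys[1:])]           # rate of change per step
second_diffs = [b - a for a, b in zip(first_diffs, first_diffs[1:])]

print(ys)            # [0, 1, 4, 9, 16, 25, 36, 49]
print(first_diffs)   # [1, 3, 5, 7, 9, 11, 13]  -- changes linearly
print(second_diffs)  # [2, 2, 2, 2, 2, 2]       -- constant
```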
Trajectory Comparison in a Vehicular Network I: Computing a Consensus Trajectory
- Award ID(s): 1761641
- PAR ID: 10131583
- Date Published:
- Journal Name: International Conference on Wireless Algorithms, Systems, and Applications
- Page Range / eLocation ID: 533-544
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Classical distribution testing assumes access to i.i.d. samples from the distribution being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X_0, …, X_t, … of an unknown, symmetric, finite-state Markov chain M. We do not control the starting state X_0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M′, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [78], which captures the scaling behavior of the total variation distance between trajectories sampled from the two chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference; these are tight up to logarithmic factors when the hitting times of the model chain M′ are Õ(n) in the size n of the state space.
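To make the single-trajectory setting concrete, here is a minimal Python sketch: it samples one run X_0, X_1, …, X_t from an "unknown" symmetric chain M and compares its empirical transition frequencies to a hypothesized model chain M′. The frequency comparison is not the paper's tester or its proposed distance measure; the chains, state space size, and threshold-free comparison are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectory(P, t, x0=0):
    """Sample a length-t trajectory from transition matrix P, starting at x0."""
    n = P.shape[0]
    traj = [x0]
    for _ in range(t):
        traj.append(rng.choice(n, p=P[traj[-1]]))
    return traj

def empirical_transitions(traj, n):
    """Row-normalized empirical transition counts from a single trajectory."""
    counts = np.zeros((n, n))
    for a, b in zip(traj, traj[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

n = 3
# Hypothetical model chain M' (symmetric) and an "unknown" chain M that deviates from it.
M_model = np.array([[0.50, 0.25, 0.25],
                    [0.25, 0.50, 0.25],
                    [0.25, 0.25, 0.50]])
M_true = np.array([[0.60, 0.20, 0.20],
                   [0.20, 0.60, 0.20],
                   [0.20, 0.20, 0.60]])

# One long trajectory of the unknown chain; no restarts, no control of X_0.
traj = sample_trajectory(M_true, t=20000)
P_hat = empirical_transitions(traj, n)
print("max entrywise gap to model:", np.abs(P_hat - M_model).max())
```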

