

Title: Student reasoning about the least-squares problem in inquiry-oriented linear algebra
The method of least-squares approximation is an important topic in some linear algebra classes. Despite this, little is known about how students come to understand it, particularly in a Realistic Mathematics Education setting. Here, we report on how students used literal symbols and equations when solving a least-squares problem in a travel scenario, as well as their reflections on the least-squares equation in an open-ended written question. We found that students used unknowns and parameters in a variety of ways. We highlight how their use of dot product equations can support their understanding of the least-squares equation.
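For readers unfamiliar with the terminology, the following is a minimal sketch of the least-squares equation the abstract refers to and of its reading in terms of dot products; the notation (A, x̂, b) is ours and is not drawn from the paper.

```latex
% Minimal sketch in our own notation (not the paper's): for an inconsistent
% system Ax = b, the least-squares solution \hat{x} satisfies the normal equation
\[
  A^{\mathsf{T}} A \hat{x} = A^{\mathsf{T}} b,
\]
% which can be read as a collection of dot product equations: each column a_i
% of A is orthogonal to the residual,
\[
  a_i \cdot (b - A \hat{x}) = 0 \qquad \text{for every column } a_i \text{ of } A.
\]
```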
Award ID(s):
1914793
NSF-PAR ID:
10465464
Author(s) / Creator(s):
Editor(s):
Cook, S.; Katz, B.; Moore-Russo, D.
Date Published:
Journal Name:
Proceedings of the Annual Conference on Research in Undergraduate Mathematics Education
ISSN:
2474-9346
Page Range / eLocation ID:
643-651
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Network tomography aims at estimating source-destination traffic rates from link traffic measurements. This inverse problem was formulated by Vardi in 1996 for independent Poisson traffic over networks operating under deterministic as well as random routing regimes. Vardi used a second-order moment-matching approach to estimate the rates, obtaining a solution of the resulting linear matrix equation via an iterative minimum I-divergence procedure. Vardi's second-order moment-matching approach was recently extended to a higher-order cumulant-matching approach with the goal of improving the rank of the system of linear equations. In this paper we go one step further and develop a moment generating function matching approach for rate estimation, and seek a least-squares as well as an iterative minimum I-divergence solution of the resulting linear equations. We also specialize this approach to a characteristic function matching approach, which exhibits some advantages: it results in fewer conflicting equations involving the empirical estimates. We demonstrate that the new approach outperforms the cumulant matching approach while being conceptually simpler.
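    As a rough illustration of the estimation setup described above (not the moment generating function or characteristic function matching developed in the paper), the sketch below recovers source-destination rates from mean link counts by nonnegative least squares; the routing matrix and traffic values are invented for the example.

```python
# Hedged sketch: first-moment (mean-rate) network tomography via least squares.
# This illustrates only the generic "linear equations in the unknown rates" setup,
# not the paper's MGF or characteristic-function matching; data below are made up.
import numpy as np
from scipy.optimize import nnls

# Routing matrix A: rows = links, columns = source-destination (OD) pairs.
# A[l, k] = 1 if OD pair k's traffic traverses link l (deterministic routing).
A = np.array([
    [1, 0, 1],   # link 1 carries OD pairs 1 and 3
    [0, 1, 1],   # link 2 carries OD pairs 2 and 3
    [1, 1, 0],   # link 3 carries OD pairs 1 and 2
], dtype=float)

true_rates = np.array([2.0, 5.0, 3.0])          # hypothetical Poisson rates
rng = np.random.default_rng(0)
# Empirical mean link counts over many measurement periods.
y = np.stack([rng.poisson(A @ true_rates) for _ in range(500)]).mean(axis=0)

# Nonnegative least-squares estimate of the OD rates from E[y] = A x.
rates_hat, residual = nnls(A, y)
print("estimated rates:", rates_hat)
```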
  2. Discovering governing physical laws from noisy data is a grand challenge in many science and engineering research areas. We present a new approach to data-driven discovery of ordinary differential equations (ODEs) and partial differential equations (PDEs), in explicit or implicit form. We demonstrate our approach on a wide range of problems, including the shallow water equations and the Navier–Stokes equations. The key idea is to select candidate terms for the underlying equations using dimensional analysis, and to approximate the weights of the terms, with error bars, using our threshold sparse Bayesian regression. This new algorithm employs Bayesian inference to tune the hyperparameters automatically. Our approach is effective, robust, and able to quantify uncertainties by providing an error bar for each discovered candidate equation. The effectiveness of our algorithm is demonstrated on a collection of classical ODEs and PDEs. Numerical experiments demonstrate the robustness of our algorithm with respect to noisy data and its ability to discover various candidate equations with error bars that represent the quantified uncertainties. Detailed comparisons with the sequential threshold least-squares algorithm and the lasso algorithm on noisy time-series measurements indicate that the proposed method provides more robust and accurate results. In addition, data-driven prediction of dynamics with error bars using the discovered governing physical laws is more accurate and robust than classical polynomial regression.
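    For context, here is a minimal sketch of the sequential threshold least-squares baseline mentioned in the comparison (not the paper's threshold sparse Bayesian regression); the candidate library, threshold value, and toy data are illustrative choices of our own.

```python
# Hedged sketch of sequential threshold least-squares (a comparison baseline in the
# abstract, not the paper's Bayesian method). Library and threshold are illustrative.
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iters=10):
    """Fit dXdt ~ Theta @ Xi, repeatedly zeroing coefficients smaller than
    `threshold` and refitting the remaining terms by least squares."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):            # refit each target separately
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x + 3y from noisy samples.
rng = np.random.default_rng(1)
x, y = rng.normal(size=(2, 200))
dxdt = -2 * x + 3 * y + 0.01 * rng.normal(size=200)
Theta = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])  # candidate terms
Xi = stlsq(Theta, dxdt[:, None])
print(np.round(Xi.ravel(), 2))   # expect roughly [0, -2, 3, 0, 0, 0]
```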
  3. The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
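    To illustrate the numerical point about the normal equations (with a generic dense example, not the tensor-structured CP-ALS subproblems of the paper), the sketch below compares a normal-equations solve with a QR-based solve on an ill-conditioned least-squares problem; the matrix and conditioning level are made up.

```python
# Hedged sketch: normal equations vs. QR on one ill-conditioned least-squares problem.
# This is a generic dense illustration, not the paper's CP-ALS algorithm.
import numpy as np

rng = np.random.default_rng(2)
# Nearly dependent columns, analogous to nearly collinear factor-matrix columns in CP-ALS.
A = rng.normal(size=(100, 3))
A[:, 2] = A[:, 1] + 1e-6 * rng.normal(size=100)
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true

# Normal equations: forming A^T A squares the condition number of A.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR-based solve: works with A directly, so it is less sensitive to ill-conditioning.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("normal equations error:", np.linalg.norm(x_normal - x_true))
print("QR solve error:        ", np.linalg.norm(x_qr - x_true))
```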