-
Understanding how students with varying capabilities think about problem solving can greatly improve personalized education and lead to significantly better learning outcomes. Here, we present NeTra, a system we developed for discovering the strategies students follow in the context of math learning. We built the system from large-scale MATHia data containing millions of student-tutor interactions. Its goal is to provide a visual interface through which educators can see the strategy a student is likely to follow on problems the student has not yet attempted. This predictive interface can help educators and tutors design interventions that are personalized to each student. Underlying the system is an AI model based on Neuro-Symbolic learning that has shown promising results in predicting both strategies and mastery of the concepts used in a strategy.
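As a rough illustration of what such a predictive interface might compute (this is not the NeTra implementation; the embeddings, step names, and candidate strategies below are all hypothetical), a minimal sketch could score candidate strategies for an unattempted problem from learned student and step representations and report the most likely one:

```python
# Minimal sketch only: rank candidate problem-solving strategies for a student.
# All names (student_vec, step_vecs, STRATEGIES) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned embeddings: one vector for the student, one per solution step.
student_vec = rng.normal(size=16)
step_vecs = {step: rng.normal(size=16) for step in
             ["identify-variables", "set-up-equation", "isolate-term", "solve"]}

# Candidate strategies are ordered sequences of steps.
STRATEGIES = {
    "equation-first": ["set-up-equation", "isolate-term", "solve"],
    "variables-first": ["identify-variables", "set-up-equation", "solve"],
}

def score_strategy(steps):
    """Score a strategy as the mean affinity between the student and its steps."""
    return float(np.mean([student_vec @ step_vecs[s] for s in steps]))

scores = {name: score_strategy(steps) for name, steps in STRATEGIES.items()}
predicted = max(scores, key=scores.get)
print(predicted, scores)
```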
-
Using archived data from middle and high school students’ mathematics-focused intelligent tutoring system (ITS) learning collected across a school year, this study explores situational achievement-goal latent profile membership and the stability of these profiles with respect to student demographics and dispositional achievement-goal scores. Over 65% of students changed situational profile membership at some point during the school year. Start-of-year dispositional motivation scores were not related to whether students remained in the same profile across all unit-level measurements, but grade level was predictive of profile stability. These findings shed light on how in-the-moment student motivation fluctuates while students are engaged in ITS math learning, and they have the potential to inform motivation interventions designed for ITS math learning.
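For readers unfamiliar with the analysis style described above, the following sketch (not the study's code) approximates a latent profile analysis with a Gaussian mixture over unit-level achievement-goal scores and then checks whether each student keeps the same profile across units. The column names, number of profiles, and simulated data are illustrative assumptions:

```python
# Hedged sketch: latent profile membership and profile stability across ITS units.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Hypothetical long-format data: one row per student per ITS unit.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "student": np.repeat(np.arange(100), 4),
    "unit": np.tile(np.arange(4), 100),
    "mastery_goal": rng.normal(4, 1, 400),
    "performance_goal": rng.normal(3, 1, 400),
})

# Latent profile analysis approximated with a Gaussian mixture over goal scores.
gm = GaussianMixture(n_components=3, random_state=0)
df["profile"] = gm.fit_predict(df[["mastery_goal", "performance_goal"]])

# A student is "stable" if they hold the same profile in every measured unit.
stability = df.groupby("student")["profile"].nunique().eq(1)
print(f"{(~stability).mean():.0%} of simulated students changed profiles")
```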
-
This paper provides an update on the Learner Data Institute (LDI; www.learnerdatainstitute.org), now in its third year since conceptualization. Funded as a conceptualization project, the LDI had two major goals in its first two years: (1) develop, implement, evaluate, and refine a framework for data-intensive science and engineering, and (2) use that framework to begin developing prototype solutions, based on data, data science, and science convergence, to a number of core challenges in learning science and engineering. A major focus of the current third year is synthesizing the work of the first two years to identify new opportunities for future research by the various mutual-interest groups within the LDI, each of which has focused on developing a prototype solution to one or more related core challenges in learning science and engineering. In addition to highlighting emerging data-intensive solutions and innovations from the LDI’s first two years, including places where LDI researchers have received additional funding for future research, we highlight core challenges our team has identified as being at a “tipping point”: challenges for which timely investment in data-intensive approaches has the maximum potential for a transformative effect.
-
Explaining the results of machine learning algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains such as healthcare, defense, and autonomous driving. In this paper, we address this problem in the context of Markov Logic Networks (MLNs), highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. MLNs are generally considered interpretable models, i.e., they can be understood by humans more easily than models learned by approaches such as deep learning. At the same time, however, it is not straightforward to obtain human-understandable explanations specific to an observed inference result (e.g., a marginal probability estimate). This is because the MLN provides a lifted interpretation, one that generalizes over all possible worlds/instantiations and is not specific to the query or evidence. In this paper, we extract grounded explanations, i.e., explanations defined with respect to specific inference queries and observed evidence. We extract these explanations from importance weights defined over the MLN formulas, which encode each formula's contribution to the final inference result. We validate our approach on real-world problems involving the analysis of Yelp reviews, and show through user studies that our explanations are richer than those of state-of-the-art non-relational explainers such as LIME.
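To make the idea of a grounded, query-specific explanation concrete, here is an illustrative sketch only (not the paper's algorithm): it ranks ground MLN formulas by how much their weighted (un)satisfaction shifts the query atom between its true and false settings, given the evidence. The formula texts, weights, and the scoring rule are assumptions for illustration:

```python
# Hedged sketch: rank ground formulas by their contribution to a single query,
# yielding a query- and evidence-specific ("grounded") explanation.
from dataclasses import dataclass

@dataclass
class GroundFormula:
    text: str           # human-readable form of the ground formula
    weight: float       # weight of the first-order formula it instantiates
    sat_if_true: bool   # satisfied when the query atom is set to True?
    sat_if_false: bool  # satisfied when the query atom is set to False?

def grounded_explanation(formulas, top_k=3):
    """Score each formula by weight * (satisfaction at query=True minus query=False)."""
    scored = [(f.weight * (int(f.sat_if_true) - int(f.sat_if_false)), f.text)
              for f in formulas]
    return sorted(scored, key=lambda s: abs(s[0]), reverse=True)[:top_k]

# Toy Yelp-style grounding: does review R123 recommend restaurant B7?
formulas = [
    GroundFormula("PositiveWords(R123) => Recommends(R123, B7)", 2.1, True, False),
    GroundFormula("MentionsWait(R123) => !Recommends(R123, B7)", 0.8, False, True),
    GroundFormula("Recommends(R123, B7) ^ HighRating(B7)", 1.5, True, False),
]

for contribution, text in grounded_explanation(formulas):
    print(f"{contribution:+.1f}  {text}")
```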