Title: Community college information technology education: Curriculum mapping, a learning science framework, and AI learning technologies.
Abstract. Most jobs in the digital economy require 4-year university degrees, excluding many community college students. To help these students join the digital economy, our project team is developing AI-based learning technology using a novel approach. First, we employ curriculum mapping to analyze courses and identify knowledge components (KCs) that are positioned to impact student outcomes. We triangulate our results using student learning data and expert-provided qualitative assessment. We then employ the Knowledge, Learning and Instruction framework to align KCs with individual tutoring and collaborative learning. This analysis is guiding us in developing intelligent tutors and collaborative learning technology, empirically tested forms of AI-based learning technology, to support IT students. In this paper, we describe our innovative approach and results thus far.
Students use learning analytics systems to make day-to-day learning decisions, but may not understand their potential flaws. This work delves into student understanding of an example learning analytics algorithm, Bayesian Knowledge Tracing (BKT), using Cognitive Task Analysis (CTA) to identify knowledge components (KCs) comprising expert student understanding. We built an interactive explanation to target these KCs and performed a controlled experiment examining how varying the transparency of limitations of BKT impacts understanding and trust. Our results show that, counterintuitively, providing some information on the algorithm’s limitations is not always better than providing no information. The success of the methods from our BKT study suggests avenues for the use of CTA in systematically building evidence-based explanations to increase end user understanding of other complex AI algorithms in learning analytics as well as other domains.
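To make the algorithm under study concrete, here is a minimal sketch of a standard Bayesian Knowledge Tracing update. The parameter values (guess, slip, learn probabilities) are illustrative defaults, not taken from the paper.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update: revise the estimate
# that a student knows a skill after observing one response.
# Parameter values are illustrative, not from the study.
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """Return the updated P(known) after a single observed response."""
    if correct:
        # Posterior given a correct answer (Bayes' rule): correct answers
        # come from knowing without slipping, or not knowing but guessing.
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess
        )
    else:
        # Posterior given an incorrect answer: slipped despite knowing,
        # or did not know and failed to guess.
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess)
        )
    # Apply the learning transition after the observation.
    return posterior + (1 - posterior) * learn

# A correct answer raises the mastery estimate from the prior.
p = bkt_update(0.3, correct=True)
```

One of BKT's limitations that such an explanation might surface: with fixed guess/slip parameters, the estimate can climb quickly on lucky guesses, which is exactly the kind of flaw end users may not anticipate.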
Knowledge components (KCs) have many applications. In computing education, however, identifying when specific KCs are demonstrated has been challenging. This paper introduces an entirely data-driven approach for (i) discovering KCs and (ii) demonstrating KCs, using students' actual code submissions. Our system is based on two expected properties of KCs: (i) they generate learning curves following the power law of practice, and (ii) they are predictive of response correctness. We train a neural architecture (named KC-Finder) that classifies the correctness of student code submissions and captures problem-KC relationships. Our evaluation on data from 351 students in an introductory Java course shows that the learned KCs can generate reasonable learning curves and predict code submission correctness. At the same time, some KCs can be interpreted to identify programming skills. We compare the learning curves described by our model to four baselines, showing that (i) identifying KCs with naive methods is a difficult task and (ii) our learning curves exhibit a substantially better curve fit. Our work represents a first step in solving the data-driven KC discovery problem in computing education.
Shi, Y; Schmucker, R; Chi, M; Barnes, T; Price, T
(Springer)
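The power-law-of-practice property the paper uses to evaluate candidate KCs can be checked with a simple log-log fit: a KC's error rate over practice opportunities should follow E(n) = a * n^(-b). The sketch below uses synthetic data for illustration only; it is not the paper's evaluation pipeline.

```python
import numpy as np

# Check whether an error-rate curve follows the power law of practice,
# E(n) = a * n**(-b), by fitting a line in log-log space.
# Synthetic, idealized KC data for illustration only.
opportunities = np.arange(1, 11)                  # 1st..10th practice attempt
error_rate = 0.6 * opportunities ** -0.5          # idealized learning curve

# In log space the model is linear: log E = log a - b * log n,
# so a least-squares line recovers the parameters.
slope, intercept = np.polyfit(np.log(opportunities), np.log(error_rate), 1)
a, b = np.exp(intercept), -slope
```

In practice, goodness of fit of this line (e.g. R^2 or residual error) is what distinguishes a plausible KC from a poorly defined one, which is the comparison the paper makes against its baselines.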
As demand grows for job-ready data science professionals, there is increasing recognition that traditional training often falls short in cultivating the higher-order reasoning and real-world problem-solving skills essential to the field. A foundational step toward addressing this gap is the identification and organization of knowledge components (KCs) that underlie data science problem solving (DSPS). KCs represent conditional knowledge, knowing about appropriate actions given particular contexts or conditions, and correspond to the critical decisions data scientists must make throughout the problem-solving process. While existing taxonomies in data science education support curriculum development, they often lack the granularity and focus needed to support the assessment and development of DSPS skills. In this paper, we present a novel framework that combines the strengths of large language models (LLMs) and human expertise to identify, define, and organize KCs specific to DSPS. We treat LLMs as "knowledge engineering assistants" capable of generating candidate KCs by drawing on their extensive training data, which includes a vast amount of domain knowledge and diverse sets of real-world DSPS cases. Our process involves prompting multiple LLMs to generate decision points, synthesizing and refining KC definitions across models, and using sentence-embedding models to infer the underlying structure of the resulting taxonomy. Human experts then review and iteratively refine the taxonomy to ensure validity. This human-AI collaborative workflow offers a scalable and efficient proof-of-concept for LLM-assisted knowledge engineering. The resulting KC taxonomy lays the groundwork for developing fine-grained assessment tools and adaptive learning systems that support deliberate practice in DSPS. Furthermore, the framework illustrates the potential of LLMs not just as content generators but as partners in structuring domain knowledge to inform instructional design.
Future work will involve extending the framework by generating a directed graph of KCs based on their input-output dependencies and validating the taxonomy through expert consensus and learner studies. This approach contributes to both the practical advancement of DSPS coaching in data science education and the broader methodological toolkit for AI-supported knowledge engineering.
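The taxonomy-structuring step, grouping candidate KC definitions by embedding similarity, can be sketched as follows. The embeddings here are toy 2-D vectors standing in for a real sentence-embedding model, and the greedy grouping routine and threshold are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def group_kcs(embeddings, threshold=0.8):
    """Greedy single-pass grouping: attach each KC to the first group
    whose representative embedding is similar enough, else start a
    new group. Returns lists of KC indices."""
    groups = []  # list of (representative_vector, member_indices)
    for i, emb in enumerate(embeddings):
        for rep, members in groups:
            if cosine(rep, emb) >= threshold:
                members.append(i)
                break
        else:
            groups.append((emb, [i]))
    return [members for _, members in groups]

# Toy embeddings: two near-duplicate "data cleaning" KCs and one
# unrelated "model evaluation" KC.
kcs = np.array([[1.0, 0.0], [0.95, 0.1], [0.0, 1.0]])
clusters = group_kcs(kcs)
```

A real pipeline would embed the textual KC definitions with a sentence-embedding model and likely use hierarchical clustering to expose taxonomy levels; this sketch only shows the similarity-based grouping idea.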
McLaren, B., Herckis, L., Teffera, L., Branstetter, L., Rose, C. P., Sakr, M., Kisow, M., Reis, R., Rinsem, M., Alenius, M., and Miller, L. "Community college information technology education: Curriculum mapping, a learning science framework, and AI learning technologies." AERA 2024, the 2024 Annual Meeting of the American Educational Research Association (AERA). https://par.nsf.gov/biblio/10545601.
@article{osti_10545601,
title = {Community college information technology education: Curriculum mapping, a learning science framework, and AI learning technologies},
url = {https://par.nsf.gov/biblio/10545601},
abstractNote = {Most jobs in the digital economy require 4-year university degrees, excluding many community college students. To help these students join the digital economy, our project team is developing AI-based learning technology using a novel approach. First, we employ curriculum mapping to analyze courses and identify knowledge components (KCs) that are positioned to impact student outcomes. We triangulate our results using student learning data and expert-provided qualitative assessment. We then employ the Knowledge, Learning and Instruction framework to align KCs with individual tutoring and collaborative learning. This analysis is guiding us in developing intelligent tutors and collaborative learning technology, empirically tested forms of AI-based learning technology, to support IT students. In this paper, we describe our innovative approach and results thus far.},
publisher = {AERA 2024, the 2024 Annual Meeting of the American Educational Research Association (AERA)},
author = {McLaren, B and Herckis, L and Teffera, L and Branstetter, L and Rose, CP and Sakr, M and Kisow, M and Reis, R and Rinsem, M and Alenius, M and Miller, L},
}