Title: Latent Skill Mining and Labeling from Courseware Content
A model that maps the requisite skills, or knowledge components, to the contents of an online course is necessary to implement many adaptive learning technologies. However, developing a skill model and tagging courseware contents with individual skills can be expensive and error prone. We propose a technology to automatically identify latent skills from instructional text on existing online courseware called Smart (Skill Model mining with Automated detection of Resemblance among Texts). Smart is capable of mining, labeling, and mapping skills without using an existing skill model or student learning (aka response) data. The goal of our proposed approach is to mine latent skills from assessment items included in existing courseware, provide discovered skills with human-friendly labels, and map didactic paragraph texts with skills. This way, mapping between assessment items and paragraph texts is formed. In doing so, automated skill models produced by Smart will reduce the workload of courseware developers while enabling adaptive online content at the launch of the course. In our evaluation study, we applied Smart to two existing authentic online courses. We then compared machine-generated skill models and human-crafted skill models in terms of the accuracy of predicting students’ learning. We also evaluated the similarity between machine-generated and human-crafted skill models. The results show that student models based on Smart-generated skill models were equally predictive of students’ learning as those based on human-crafted skill models, as validated on two OLI (Open Learning Initiative) courses. Also, Smart can generate skill models that are highly similar to human-crafted models, as evidenced by the normalized mutual information (NMI) values.
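The abstract describes two measurable pieces: grouping assessment items into latent skills by textual resemblance, and comparing the machine-generated grouping to a human-crafted skill model with normalized mutual information (NMI). The following is a minimal Python sketch of that kind of pipeline, not Smart's actual algorithm; the TF-IDF and k-means choices, the example item texts, and the hand-made skill tags are assumptions for illustration only.

```python
# Illustrative sketch: group assessment-item texts by resemblance, give each
# discovered skill a crude label, and compare the grouping to a human-crafted
# skill model with NMI. TF-IDF + k-means are stand-ins, not Smart's algorithm.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical assessment-item texts and a human-crafted skill tagging.
items = [
    "Compute the mean of the data set.",
    "Compute the mean and median of the sample.",
    "State the null hypothesis for the experiment.",
    "State the null and alternative hypotheses.",
]
human_skills = ["mean", "mean", "hypothesis", "hypothesis"]

# Cluster items into latent skills based on textual similarity.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(items)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
machine_skills = km.labels_

# A simple human-friendly label for each discovered skill: its top-weighted term.
terms = vectorizer.get_feature_names_out()
for k, center in enumerate(km.cluster_centers_):
    print(f"skill {k}: {terms[np.argmax(center)]}")

# Agreement between the machine-generated and human-crafted skill models
# (NMI is 1.0 when the two groupings match exactly).
print("NMI:", normalized_mutual_info_score(human_skills, machine_skills))
```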
Award ID(s): 2016966
PAR ID: 10423952
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Educational Data Mining
Volume: 14
Issue: 2
ISSN: 2157-2100
Page Range / eLocation ID: 1-31
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The Academic Vigilance Environment (AVE) presented here combines two innovative tools: AchieveUp's micro-credentialing system identifies and showcases students' skills, while KnowGap provides personalized learning content that fills knowledge gaps. To meet the growing demand for micro-credentials, AchieveUp integrates this capability into established courses, using online quizzes to evaluate skills from a predefined test bank. By leveraging responses from digitized quiz-based assessments, we have developed a synergistic approach combining online assessment with remediation protocols. Our Python-based toolkit enables undergraduate tutors to identify and address knowledge gaps among at-risk learners in higher-education courses. Through digitized assessments, personalized tutoring, and automated skill-analysis scripts integrated into the Canvas LMS, students receive skill-specific badges that provide incremental motivation and enhance their self-efficacy. In a required electrical and computer engineering course at UCF, the implemented software allowed for the distribution of 17 unique digital badges suitable for LinkedIn posting, benefiting both students and employers by verifying skills while also providing instructors with insights to improve course instruction.
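None of the toolkit's code appears in the abstract above; the sketch below is a purely illustrative Python roll-up of the idea it describes, aggregating per-question quiz scores to skills and awarding a badge above a threshold. The question-to-skill map, threshold, and function name are hypothetical, not AchieveUp's actual implementation.

```python
# Hypothetical sketch: roll per-question quiz scores up to skills and award a
# badge when a skill's average score clears a threshold.
from collections import defaultdict

# Which quiz questions exercise which skill (made-up mapping).
QUESTION_TO_SKILL = {"q1": "ohms_law", "q2": "ohms_law", "q3": "kcl", "q4": "kcl"}
BADGE_THRESHOLD = 0.8  # fraction of available points needed to earn a badge

def award_badges(question_scores: dict[str, float]) -> list[str]:
    """Return the skills for which this student earns a badge.

    question_scores maps question id -> fraction of points earned (0.0-1.0),
    e.g. as exported from an LMS quiz report.
    """
    per_skill = defaultdict(list)
    for qid, score in question_scores.items():
        skill = QUESTION_TO_SKILL.get(qid)
        if skill is not None:
            per_skill[skill].append(score)
    return [
        skill
        for skill, scores in per_skill.items()
        if sum(scores) / len(scores) >= BADGE_THRESHOLD
    ]

print(award_badges({"q1": 1.0, "q2": 0.75, "q3": 0.5, "q4": 1.0}))
# -> ['ohms_law']  (average 0.875 clears 0.8; 'kcl' at 0.75 falls short)
```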
  2. This paper presents an innovative courseware project based on the Advanced Distributed Learning (ADL) Initiative's Total Learning Architecture (TLA [1]), a technical framework for education and training built on a data strategy around open standards to support interoperability across diverse organizations and products ([2]). The framework defines a set of policies, specifications, and standards for a future learning ecosystem that supports lifelong learning through personalized, flexible environments spanning both formal and informal activities [3]. In Fall 2023, a TLA-inspired course framework was implemented in a data visualization course for senior undergraduates and graduate students, using Moodle and a Learning Record Store (LRS) to track over 200,000 learning records. This system allowed instructors to visually monitor online learning activities for the whole class as well as for selected individual learners. As future work, the course will expand to 10 STEM courses across 11 universities over the next three years as part of an existing NSF commitment.
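As an illustration of the kind of event a Learning Record Store collects, the sketch below posts a single xAPI-style statement from Python. The LRS endpoint, credentials, and activity identifiers are placeholders, and this is not the project's actual instrumentation.

```python
# Illustrative sketch: record one learner interaction in an LRS as an xAPI
# statement. Endpoint and credentials below are placeholders.
import requests

LRS_URL = "https://lrs.example.edu/xapi"   # placeholder LRS endpoint
AUTH = ("lrs_user", "lrs_password")        # placeholder Basic-auth credentials

statement = {
    "actor": {"mbox": "mailto:student@example.edu", "name": "Student"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://moodle.example.edu/mod/quiz/view.php?id=42",
        "definition": {"name": {"en-US": "Data Visualization Quiz 1"}},
    },
}

resp = requests.post(
    f"{LRS_URL}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()
print("stored statement id:", resp.json()[0])  # the LRS returns a list of ids
```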
  3. Physics instructors and education researchers use research-based assessments (RBAs) to evaluate students' preparation for physics courses. This preparation can cover a wide range of constructs, including mathematics and physics content. Using separate mathematics and physics RBAs consumes course time. We are developing a new RBA for introductory mechanics as an online test using both computerized adaptive testing and cognitive diagnostic models. This design allows the adaptive RBA to assess mathematics and physics content knowledge within a single assessment. In this article, we used an evidence-centered design framework to inform the extent to which our models of skills students develop in physics courses fit the data from three mathematics RBAs. Our dataset came from the LASSO platform and includes 3,491 responses from the Calculus Concept Assessment, Calculus Concept Inventory, and Pre-calculus Concept Assessment. Our model included five skills: apply vectors, conceptual relationships, algebra, visualizations, and calculus. The "deterministic inputs, noisy 'and' gate" (DINA) analyses demonstrated a good fit for the five skills. The classification accuracies for the skills were satisfactory. Including items from the three mathematics RBAs in the item bank for the adaptive RBA will provide a flexible assessment of these skills across mathematics and physics content areas that can adapt to instructors' needs.
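The DINA model named above has a simple closed form: an examinee answers item j correctly with probability 1 - s_j (one minus the slip rate) when they have mastered every skill the Q-matrix requires for that item, and with probability g_j (the guess rate) otherwise. Here is a minimal sketch of that response rule; the Q-matrix and parameter values are made up for illustration, over the five skills the abstract lists.

```python
# Minimal DINA ("deterministic inputs, noisy 'and' gate") sketch: P(correct)
# is 1 - slip when the examinee has mastered every skill the Q-matrix requires
# for the item, and guess otherwise. Values below are invented for illustration.
import numpy as np

# Q-matrix: rows = items, columns = skills
# (apply vectors, conceptual relationships, algebra, visualizations, calculus).
Q = np.array([
    [1, 0, 1, 0, 0],   # item 1 needs vectors and algebra
    [0, 1, 0, 1, 0],   # item 2 needs conceptual relationships and visualizations
    [0, 0, 1, 0, 1],   # item 3 needs algebra and calculus
])
slip = np.array([0.10, 0.15, 0.20])   # P(wrong | all required skills mastered)
guess = np.array([0.20, 0.25, 0.10])  # P(correct | some required skill missing)

def p_correct(alpha: np.ndarray) -> np.ndarray:
    """P(correct response) for each item, given a 0/1 skill-mastery vector alpha."""
    eta = np.all(Q <= alpha, axis=1)      # does alpha cover every required skill?
    return np.where(eta, 1.0 - slip, guess)

# A student who has mastered vectors, algebra, and calculus but not the rest.
alpha = np.array([1, 0, 1, 0, 1])
print(p_correct(alpha))   # item probabilities: 0.9, 0.25, 0.8
```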
  4. The rise of machine learning (ML) technology has inspired a boom in its applications in electronic design automation (EDA) and helps improve the degree of automation in chip design. However, manually crafting ML models remains a complex and time-consuming process, because it requires extensive human expertise and tremendous engineering effort to carefully extract features and design model architectures. In this work, we leverage automated ML techniques to automate ML model development for routability prediction, a well-established technique that helps guide cell placement toward routable solutions. We present an automated feature selection method to identify suitable features for model inputs. We develop a neural architecture search method to search for high-quality neural architectures without human intervention. Our search method supports various operations and highly flexible connections, leading to architectures significantly different from all previous human-crafted models. Our experimental results demonstrate that our automatically generated models clearly outperform multiple representative manually crafted solutions, with a 9.9% improvement. Moreover, compared with human-crafted models, which can easily take weeks or months to develop, our efficient automated machine-learning framework completes the whole model development process in only one day.
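The paper's own search method is not reproduced here; as a rough sketch of what an automated feature-selection step for routability prediction can look like, the example below uses scikit-learn's recursive feature elimination with cross-validation on synthetic placement features. The feature names, data, and model choice are assumptions for illustration, not the paper's method.

```python
# Rough sketch of automated feature selection for a routability classifier:
# recursive feature elimination with cross-validation picks an input subset
# without hand-tuning. Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

rng = np.random.default_rng(0)
FEATURES = ["pin_density", "net_count", "macro_overlap", "congestion_est",
            "wirelength_est", "cell_density", "random_noise"]

# Synthetic placement features and routability labels (1 = hard to route).
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.normal(size=500) > 0).astype(int)

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=100, random_state=0),
    step=1,
    cv=5,
    scoring="roc_auc",
)
selector.fit(X, y)

chosen = [f for f, keep in zip(FEATURES, selector.support_) if keep]
print("selected features:", chosen)
```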