This content will become publicly available on February 12, 2026

Title: Construction and Preliminary Validation of a Dynamic Programming Concept Inventory
Concept inventories are standardized assessments that evaluate student understanding of key concepts within academic disciplines. While prevalent across STEM fields, their development has lagged for advanced computer science topics like dynamic programming (DP), an algorithmic technique that poses significant conceptual challenges for undergraduates. To fill this gap, we developed and validated a Dynamic Programming Concept Inventory (DPCI). We detail the iterative process used to formulate multiple-choice questions targeting student misconceptions about DP concepts identified in prior research. We discuss key decisions, tradeoffs, and challenges faced in crafting probing questions that subtly reveal these conceptual misunderstandings. We conducted a preliminary psychometric validation by administering the DPCI to 172 undergraduate CS students, finding our questions to be of appropriate difficulty and to discriminate effectively between differing levels of student understanding. Taken together, our validated DPCI will enable instructors to accurately assess student mastery of DP. Moreover, our approach to devising a concept inventory for an advanced theoretical computer science topic can guide future efforts to create assessments for other under-evaluated areas that currently lack coverage.
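The abstract reports item difficulty and discrimination but not how they are computed. A minimal sketch of a classical item analysis in Python, assuming a 0/1 response matrix; the simulated data and item count below are illustrative assumptions, not the DPCI's:

    import numpy as np

    # Hypothetical 0/1 response matrix: rows = students, columns = items.
    # Simulated for illustration; the actual DPCI responses are not public.
    rng = np.random.default_rng(0)
    responses = (rng.random((172, 20)) < 0.6).astype(float)
    total = responses.sum(axis=1)

    # Item difficulty: the proportion of students answering the item correctly.
    difficulty = responses.mean(axis=0)

    # Item discrimination: correlation between an item score and the total
    # score on the remaining items (the item-rest correlation).
    rest = total[:, None] - responses
    discrimination = np.array([
        np.corrcoef(responses[:, j], rest[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])

    for j, (p, r) in enumerate(zip(difficulty, discrimination)):
        print(f"item {j:2d}: difficulty = {p:.2f}, discrimination = {r:.2f}")

Items with difficulty near 0 or 1, or with discrimination near zero, are typical candidates for revision during the kind of iterative refinement the abstract describes.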
Award ID(s):
2434364
PAR ID:
10627784
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400705311
Page Range / eLocation ID:
325 to 331
Format(s):
Medium: X
Location:
Pittsburgh, PA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Pair programming is a popular strategy in computer science education for teaching programming to novices. In this study, we examined the effect of three pair programming conditions on upper elementary school students' CS conceptual understanding: one computer with roles (1C with roles), two computers without roles (2C no roles), and two computers with roles (2C with roles). Students engaged in four days of computer programming activities and completed a CS concept assessment, a CS attitudes survey, and a collaboration perceptions survey before and after the activities. We used the validated E-CSCA (Elementary Computer Science Concepts Assessment) to measure students' understanding of CS concepts. We tested the relationship between pair programming condition and students' CS conceptual understanding and found that condition mattered: students in the 2C with roles condition demonstrated better CS learning than students in the other two conditions. The results also showed no changes in students' CS attitudes or perceptions of collaboration before and after the activities, and no significant impact of these attitudinal factors on students' learning of CS concepts in pair programming settings. Our study highlights the importance of roles and of the number of computers in pair programming settings, especially for elementary students.
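    The abstract does not name the statistical test used, but one common way to compare learning across three conditions is a one-way ANOVA on pre-to-post gain scores. A sketch under that assumption; the group sizes and simulated scores below are not the study's:

        import numpy as np
        from scipy import stats

        # Hypothetical pre-to-post gain scores for the three conditions.
        # Simulated for illustration; means and sizes are not from the study.
        rng = np.random.default_rng(1)
        gains = {
            "1C with roles": rng.normal(1.0, 2.0, 30),
            "2C no roles":   rng.normal(1.2, 2.0, 30),
            "2C with roles": rng.normal(2.5, 2.0, 30),
        }

        # A significant F statistic suggests condition affected learning;
        # pairwise follow-up tests would locate the difference.
        f, p = stats.f_oneway(*gains.values())
        print(f"F = {f:.2f}, p = {p:.4f}")
        for name, g in gains.items():
            print(f"{name}: mean gain = {g.mean():.2f}")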
  2.
    We reflect on our ongoing journey in the educational Cybersecurity Assessment Tools (CATS) Project to create two concept inventories for cybersecurity. We identify key steps in this journey and important questions we faced. We explain the decisions we made and discuss the consequences of those decisions, highlighting what worked well and what might have gone better. The CATS Project is creating and validating two concept inventories—conceptual tests of understanding—that can be used to measure the effectiveness of various approaches to teaching and learning cybersecurity. The Cybersecurity Concept Inventory (CCI) is for students who have recently completed any first course in cybersecurity; the Cybersecurity Curriculum Assessment (CCA) is for students who have recently completed an undergraduate major or track in cybersecurity. Each assessment tool comprises 25 multiple-choice questions (MCQs) of various difficulties that target the same five core concepts, but the CCA assumes greater technical background. Key steps include defining project scope, identifying the core concepts, uncovering student misconceptions, creating scenarios, drafting question stems, developing distractor answer choices, generating educational materials, performing expert reviews, recruiting student subjects, organizing workshops, building community acceptance, forming a team and nurturing collaboration, adopting tools, and obtaining and using funding. Creating effective MCQs is difficult and time-consuming, and cybersecurity presents special challenges. Because cybersecurity issues are often subtle, where the adversarial model and details matter greatly, it is challenging to construct MCQs for which there is exactly one best but non-obvious answer. We hope that our experiences and lessons learned may help others create more effective concept inventories and assessments in STEM. 
  3. The expansion of computer science (CS) into K-12 contexts has resulted in a diverse ecosystem of curricula designed for various grade levels, teaching a variety of concepts, and using a wide array of programming languages and environments. Many students will learn more than one programming language over the course of their studies. There is a growing need for computer science assessments that can measure student learning over time, but these multilingual learning pathways create two challenges for assessment in computer science. First, validated assessments do not exist for all of the programming languages used in CS classrooms. Second, it is difficult to measure growth in student understanding over time when students move between programming languages as they progress in their CS education. In this position paper, we argue that the field of computing education research needs to develop methods and tools to better measure students' learning over time and across the different programming languages they learn along the way. In presenting this position, we share data showing that students approach assessment problems differently depending on the programming language, even when the problems are conceptually isomorphic, and we discuss some approaches for developing multilingual assessments of student learning over time.
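    To make "conceptually isomorphic" concrete, the two hypothetical stems below target the same concept (tracing a loop with an accumulator) in different surface languages. They are my illustration, not items from the paper:

        # Two hypothetical stems for the same concept, stored as strings so a
        # multilingual assessment could tag them with one shared concept ID.
        ITEM_PYTHON = """
        total = 0
        for i in range(1, 4):
            total = total + i
        # What is the value of total after the loop?  (Answer: 6)
        """

        ITEM_JAVA = """
        int total = 0;
        for (int i = 1; i < 4; i++) {
            total = total + i;
        }
        // What is the value of total after the loop?  (Answer: 6)
        """

        # Tagging both stems with the same concept lets growth on that concept
        # be tracked even as students switch languages.
        ITEMS = {"loops.accumulator": [ITEM_PYTHON, ITEM_JAVA]}
        for concept, stems in ITEMS.items():
            print(concept, "->", len(stems), "language variants")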
  4. Security failures in software arising from failures to practice secure programming are commonplace. Improving this situation requires that practitioners have a clear understanding of the foundational concepts in secure programming as a basis for building new knowledge and responding to new challenges. We developed a Secure Programming Concept Inventory (SPCI) to measure students' understanding of foundational concepts in secure programming. The SPCI consists of thirty-five multiple-choice items targeting ten concept areas of secure programming. The SPCI was developed by establishing the content domain of secure programming, developing a pool of test items, testing and refining the items over multiple rounds, and finally reducing the inventory to produce the final scale. Scale development began by identifying the core concepts in secure programming. A Delphi study was conducted with thirty practitioners from industry, academia, and government to establish the foundational concepts of secure programming and develop a concept map. To build a set of misconceptions in secure programming, the researchers conducted interviews with students and instructors in the field and analyzed them using content analysis, resulting in a taxonomy of misconceptions in secure programming covering ten concept areas. From this taxonomy, a pool of 225 multiple-choice questions was developed and administered to 690 students across four institutions. Item discrimination and item difficulty scores were calculated, and the best-performing items were mapped to the misconception categories to create subscales for each concept area, resulting in a validated 35-item scale.
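    The abstract describes reducing a 225-item pool to a validated 35-item scale by mapping the best-performing items to concept areas. A minimal sketch of that selection step; the simulated statistics, thresholds, and per-area quota are rule-of-thumb assumptions, not the SPCI's reported criteria:

        import numpy as np

        # Hypothetical statistics for a 225-item pool over 10 concept areas.
        rng = np.random.default_rng(2)
        n_items, n_areas = 225, 10
        difficulty = rng.uniform(0.1, 0.95, n_items)      # proportion correct
        discrimination = rng.uniform(-0.1, 0.6, n_items)  # item-rest correlation
        area = rng.integers(0, n_areas, n_items)          # concept-area label

        # Keep items of moderate difficulty with acceptable discrimination,
        # then take the best-discriminating survivors in each concept area.
        ok = (difficulty > 0.2) & (difficulty < 0.9) & (discrimination >= 0.2)
        selected = []
        for a in range(n_areas):
            candidates = np.flatnonzero(ok & (area == a))
            best = candidates[np.argsort(-discrimination[candidates])][:4]
            selected.extend(best.tolist())

        print(f"{len(selected)} items selected from the pool of {n_items}")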