

Title: Creating a Cybersecurity Concept Inventory: A Status Report on the CATS Project
We report on the status of our Cybersecurity Assessment Tools (CATS) project, which is creating and validating a concept inventory for cybersecurity that assesses the quality of instruction of any first course in cybersecurity. In fall 2014, we carried out a Delphi process that identified core concepts of cybersecurity. In spring 2016, we interviewed twenty-six students to uncover their understandings and misconceptions about these concepts. In fall 2016, we generated our first assessment tool: a draft Cybersecurity Concept Inventory (CCI), comprising approximately thirty multiple-choice questions. Each question targets a concept; incorrect answers are based on misconceptions observed in the interviews. This year we are validating the draft CCI using cognitive interviews, expert reviews, and psychometric testing. In this paper, we highlight our progress to date in developing the CCI.

The CATS project provides infrastructure for rigorous evidence-based improvement of cybersecurity education. The CCI permits comparisons of different instructional methods by assessing how well students learned the core concepts of the field (especially adversarial thinking), where instructional methods refer to how material is taught (e.g., lab-based, case studies, collaborative, competitions, gaming). Specifically, the CCI is a tool that will enable researchers to scientifically quantify and measure the effect of their approaches to, and interventions in, cybersecurity education.
Award ID(s):
1819521
NSF-PAR ID:
10110275
Journal Name:
National Cyber Summit
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. For two days in February 2018, 17 cybersecurity educators and professionals from government and industry met in a "hackathon" to refine existing draft multiple-choice test items, and to create new ones, for a Cybersecurity Concept Inventory (CCI) and Cybersecurity Curriculum Assessment (CCA) being developed as part of the Cybersecurity Assessment Tools (CATS) Project. We report on the results of the CATS Hackathon, discussing the methods we used to develop test items, highlighting the evolution of a sample test item through this process, and offering suggestions to others who may wish to organize similar hackathons. Each test item embodies a scenario, question stem, and five answer choices. During the Hackathon, participants organized into teams to (1) generate new scenarios and question stems, (2) extend CCI items into CCA items and generate new answer choices for new scenarios and stems, and (3) review and refine draft CCA test items. The CATS Project provides rigorous evidence-based instruments for assessing and evaluating educational practices; these instruments can help identify pedagogies and content that are effective in teaching cybersecurity. The CCI measures how well students understand basic concepts in cybersecurity (especially adversarial thinking) after a first course in the field. The CCA measures how well students understand core concepts after completing a full cybersecurity curriculum.
    We reflect on our ongoing journey in the educational Cybersecurity Assessment Tools (CATS) Project to create two concept inventories for cybersecurity. We identify key steps in this journey and important questions we faced. We explain the decisions we made and discuss the consequences of those decisions, highlighting what worked well and what might have gone better. The CATS Project is creating and validating two concept inventories—conceptual tests of understanding—that can be used to measure the effectiveness of various approaches to teaching and learning cybersecurity. The Cybersecurity Concept Inventory (CCI) is for students who have recently completed any first course in cybersecurity; the Cybersecurity Curriculum Assessment (CCA) is for students who have recently completed an undergraduate major or track in cybersecurity. Each assessment tool comprises 25 multiple-choice questions (MCQs) of various difficulties that target the same five core concepts, but the CCA assumes greater technical background. Key steps include defining project scope, identifying the core concepts, uncovering student misconceptions, creating scenarios, drafting question stems, developing distractor answer choices, generating educational materials, performing expert reviews, recruiting student subjects, organizing workshops, building community acceptance, forming a team and nurturing collaboration, adopting tools, and obtaining and using funding. Creating effective MCQs is difficult and time-consuming, and cybersecurity presents special challenges. Because cybersecurity issues are often subtle, where the adversarial model and details matter greatly, it is challenging to construct MCQs for which there is exactly one best but non-obvious answer. We hope that our experiences and lessons learned may help others create more effective concept inventories and assessments in STEM. 
  3. We present and analyze results from a pilot study that explores how crowdsourcing can be used in the process of generating distractors (incorrect answer choices) in multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each of these groups. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we obtained feedback from additional Amazon Mechanical Turk subjects on the resulting new draft test items (including distractors). Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool in generating effective distractors (attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method of having one or more experts draft distractors, building on talk-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
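The distractor-drafting pipeline described above (filter out no-effort responses, group similar responses, keep the most frequent groups, and pick a representative for each) can be sketched roughly as follows. This is only an illustrative assumption of how such a pipeline might look; the normalization heuristic and the sample responses are invented for demonstration and do not come from the project itself:

```python
from collections import Counter
import re

def normalize(response: str) -> str:
    """Crude canonical form: lowercase, strip punctuation, collapse whitespace.
    A real pipeline might instead use stemming or semantic clustering."""
    text = re.sub(r"[^\w\s]", "", response.lower())
    return " ".join(text.split())

def draft_distractors(responses, k=4):
    """Group similar open-ended responses and return the k most frequent
    groups as candidate distractors, each represented by its most common
    original wording."""
    groups = {}  # normalized form -> Counter of original wordings
    for r in responses:
        key = normalize(r)
        if not key:  # filter: drop empty responses
            continue
        groups.setdefault(key, Counter())[r] += 1
    # rank groups by total frequency and keep the top k
    ranked = sorted(groups.values(), key=lambda c: -sum(c.values()))[:k]
    return [c.most_common(1)[0][0] for c in ranked]

# Hypothetical open-ended responses to a question stem:
responses = [
    "The password is encrypted.", "the password is encrypted",
    "It uses a firewall.", "It uses a firewall.", "firewall",
]
print(draft_distractors(responses, k=2))
```

The expert refinement step remains manual: the top groups only surface which misconceptions are common enough to be attractive distractors.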