Title: Comparing Learning Taxonomies With Computer Science K-12 Standards
There is a need to analyze state computer science standards to determine their cognitive complexity and alignment across grades. However, because these standards are so recent, there is very little research on the topic, including on the use of educational taxonomies as analysis tools. The purpose of this paper is to answer the question: How do Bloom's Revised Taxonomy and the SOLO taxonomy compare in their analysis of computer science standards? We categorized state CS standards according to their level in Bloom's Revised Taxonomy and the SOLO taxonomy. Analyzing state CS standards with Bloom's Revised Taxonomy or with the SOLO taxonomy produces wide areas of agreement but also some differences that might matter in various use cases, such as aligning standards across grade levels or determining whether a standard addresses a higher-order thinking skill.
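As a sketch of the dual-coding approach described above, the two taxonomy codings can be cross-tabulated to locate areas of agreement and disagreement. The level names come from the two taxonomies, but the tagged standards below are invented for illustration; the paper's actual coding was done by the authors, not by this code.

```python
from collections import Counter

# Each (hypothetical) standard is dual-coded with a Bloom's Revised level
# and a SOLO level; the pairings below are illustrative, not the paper's data.
tagged = [
    ("understand", "multistructural"),
    ("apply", "relational"),
    ("create", "extended abstract"),
    ("apply", "multistructural"),
    ("understand", "unistructural"),
]

# Cross-tabulate the two codings: cells where the levels diverge are the
# cases where the taxonomies would lead an analyst to different conclusions.
crosstab = Counter(tagged)
for (bloom, solo), n in sorted(crosstab.items()):
    print(f"{bloom:12s} / {solo:18s}: {n}")
```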
Award ID(s):
2311746
PAR ID:
10643574
Author(s) / Creator(s):
Publisher / Repository:
AERA
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Introduction: State and national learning standards play an important role in articulating and standardizing K-12 computer science education. However, these standards have not been extensively researched, especially in terms of their cognitive complexity. Analyses of cognitive complexity, accomplished via comparison of standards to a taxonomy of learning, can provide an important data point for understanding the prevalence of higher-order versus lower-order thinking skills in a set of standards. Objective: The objective of this study is to answer the research question: How do state and national K-12 computer science standards compare in terms of their cognitive complexity? Methods: We used Bloom's Revised Taxonomy to assess the cognitive complexity of a dataset consisting of state computer science standards (n = 9695) and the 2017 Computer Science Teachers Association (CSTA) standards (n = 120). To enable a quantitative comparison of the standards, we assigned numbers to the Bloom's levels. Results: The CSTA standards had a higher average level of cognitive complexity than most states' standards. States were more likely than the CSTA standards to have standards at the lowest Bloom's level. Cognitive complexity varied widely by state and, within a state, by grade band. For the states, standards at the evaluate level were least common; in the CSTA standards, the remember level was least common. Discussion: While there are legitimate critiques of Bloom's Revised Taxonomy, it may nonetheless be a useful tool for assessing learning standards, especially comparatively. Our results point to differences between and within state and national standards. Recognition of these differences and their implications can be leveraged by future standards writers, curriculum developers, and computing education researchers to craft standards that best meet the needs of all learners.
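The numeric scoring described in the Methods can be sketched as follows. The 1-6 mapping simply reflects the ordering of the Bloom's Revised levels; the tagged standards themselves are invented for illustration, not taken from the paper's dataset.

```python
# Bloom's Revised Taxonomy levels mapped to ordinal scores (1 = lowest).
BLOOM_SCORE = {
    "remember": 1, "understand": 2, "apply": 3,
    "analyze": 4, "evaluate": 5, "create": 6,
}

# Hypothetical tagged standards: (source, bloom_level). Illustrative only.
standards = [
    ("CSTA", "analyze"), ("CSTA", "create"),
    ("StateA", "remember"), ("StateA", "apply"),
]

def avg_complexity(rows, source):
    """Mean Bloom's score for one source's standards."""
    scores = [BLOOM_SCORE[level] for src, level in rows if src == source]
    return sum(scores) / len(scores)

print(avg_complexity(standards, "CSTA"))    # 5.0
print(avg_complexity(standards, "StateA"))  # 2.0
```

Treating the levels as equally spaced ordinals is a simplification, but it is enough to compare average complexity across sources, as the study does.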
  2. In the United States, state learning standards guide curriculum, assessment, teacher certification, and other key drivers of the student learning experience. Investigating standards allows us to answer important questions about the field of K-12 computer science (CS) education. Our team has created a dataset of state-level K-12 CS standards for all US states that currently have such standards (n = 42). This dataset was created by CS subject matter experts, who manually tagged each of the approximately 10,000 state CS standards with its assigned grade level/band, its category/topic, and, if applicable, the CSTA standard to which it is identical or similar. We also determined each standard's cognitive complexity using Bloom's Revised Taxonomy. Using the dataset, we were able to analyze each state's CS standards using a variety of metrics and approaches. To our knowledge, this is the first comprehensive, publicly available dataset of state CS standards that includes the factors mentioned above. We believe that this dataset will be useful to other CS education researchers, including those who want to better understand the state and national landscape of K-12 CS education in the US, the characteristics of CS learning standards, the coverage of particular CS topics (e.g., cybersecurity, AI), and many other topics. In this lightning talk, we will introduce the dataset's features as well as some tools that we have developed (e.g., to determine a standard's Bloom's level) that may be useful to others who use the dataset.
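A minimal sketch of how a dataset with the fields described above might be queried. The records, field names, and values here are invented for illustration; the real dataset's schema may differ.

```python
# Illustrative records mimicking the described fields: grade band, topic,
# matched CSTA standard (or None), and Bloom's level. Not the real data.
records = [
    {"state": "WA", "grade_band": "6-8", "topic": "Algorithms & Programming",
     "csta_match": "2-AP-10", "bloom": "apply"},
    {"state": "WA", "grade_band": "9-12", "topic": "Cybersecurity",
     "csta_match": None, "bloom": "evaluate"},
    {"state": "TX", "grade_band": "6-8", "topic": "Cybersecurity",
     "csta_match": "2-NI-05", "bloom": "understand"},
]

def coverage(rows, topic):
    """Which states cover a topic, and at which grade bands?"""
    return sorted({(r["state"], r["grade_band"]) for r in rows
                   if r["topic"] == topic})

print(coverage(records, "Cybersecurity"))
```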
  3. Introduction: Learning standards are a crucial determinant of computer science (CS) education at the K-12 level, but they are not often researched despite their importance. We sought to address this gap with a mixed-methods study examining state and national K-12 CS standards. Research Question: What are the similarities and differences between state and national computer science standards? Methods: We tagged the state CS standards (n = 9695) according to their grade band/level, topic, course, and similarity to a Computer Science Teachers Association (CSTA) standard. We also analyzed the content of standards similar to CSTA standards to determine their topics, cognitive complexity, and other features. Results: We found some commonalities amidst broader diversity in approaches to organization and content across the states, relative to the CSTA standards. The content analysis showed that a common difference between state and CSTA standards is that state standards tend to include concrete examples. We also found differences across states in how similar their standards are to CSTA standards, as well as differences in how cognitively complex the standards are. Discussion: Standards writers face many tensions and trade-offs, and this analysis shows how, in general terms, various states have chosen to manage those trade-offs in writing standards. For example, adding examples can improve clarity and specificity, but perhaps at the cost of brevity and longevity. A better understanding of the landscape of state standards can assist future standards writers, curriculum developers, and researchers in their work.
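The similarity tagging described in the Methods was done manually by subject matter experts. As a purely illustrative automated proxy, a token-overlap (Jaccard) score could flag candidate state/CSTA matches for review; both standard texts below are invented examples, not quotations from any standards document.

```python
def jaccard(a, b):
    """Token-overlap similarity between two short texts (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Invented example texts standing in for a national and a state standard.
national = "Create clearly named variables that represent different data types"
state = "Create variables that represent different types of data"

print(round(jaccard(national, state), 2))  # 0.7
```

A real pipeline would need stemming, stopword handling, and human review of borderline scores; this sketch only shows the idea of quantifying similarity.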
  4. The "Computer Science for All" initiative advocates for universal access to computer science (CS) instruction. A key strategy toward this end has been to establish CS content standards outlining what all students should have the opportunity to learn. Standards can support curriculum quality and access to quality CS instruction, but only if they are used to inform curriculum design and instructional practice. Professional learning offered to teachers of CS has typically focused on learning to implement a specific curriculum rather than on deepening understanding of CS concepts. We set out to develop a set of educative resources, formative assessment tools, and teacher professional development (PD) sessions to support middle school CS teachers' knowledge of CS standards and standards-aligned formative assessment literacy. Our PD and associated resources focus on five CS standards in the Algorithms and Programming strand and are meant to support teachers using any CS curriculum or programming language. In this experience report, we share what we learned from implementing our standards-based PD with four middle school CS teachers. Teachers initially perceived standards as irrelevant to their teaching, but they came to appreciate how a deeper understanding of CS concepts could enhance their instructional practice. Analysis of PD observations and exit surveys, teacher interviews, and teacher responses to a survey assessing CS pedagogical content knowledge demonstrated the complexity of using content standards as a driver of high-quality CS instruction at the middle school level, and reinforced our position that more standards-focused PD is needed.
  5. Introduction: Because developing integrated computer science (CS) curriculum is a resource-intensive process, there is interest in leveraging the capabilities of AI tools, including large language models (LLMs), to streamline this task. However, given the novelty of LLMs, little is known about their ability to generate appropriate curriculum content. Research Question: How do current LLMs perform on the task of creating appropriate learning activities for integrated computer science education? Methods: We tested two LLMs (Claude 3.5 Sonnet and ChatGPT 4-o) by providing them with a subset of national learning standards for both CS and language arts and asking them to generate a high-level description of learning activities that met standards for both disciplines. Four humans rated the LLM output, using an aggregate rating approach, in terms of (1) whether it met the CS learning standard, (2) whether it met the language arts learning standard, (3) whether it was equitable, and (4) its overall quality. Results: For Claude AI, 52% of the activities met language arts standards, 64% met CS standards, and the average quality rating was middling. For ChatGPT, 75% of the activities met language arts standards, 63% met CS standards, and the average quality rating was low. Virtually all activities from both LLMs were rated as neither actively promoting nor inhibiting equitable instruction. Discussion: Our results suggest that LLMs are not (yet) able to create appropriate learning activities from learning standards. The activities were generally not usable by classroom teachers without further elaboration and/or modification. There were also grammatical errors in the output, which is uncommon in LLM-produced text. Further, standards in one or both disciplines were often not addressed, and the quality of the activities was often low. We conclude with recommendations for the use of LLMs in curriculum development in light of these findings.
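One simple way to roll four raters' judgments up into the percentages reported above is majority vote. This is a hypothetical sketch with invented ratings; the abstract says ratings were aggregated but does not specify the exact rule, so majority vote is an assumption here.

```python
# Hypothetical rater data: for each generated activity, four raters judged
# whether it met the CS standard. These ratings are invented for illustration.
ratings = {
    "activity1": [True, True, False, True],
    "activity2": [False, False, True, False],
    "activity3": [True, True, True, True],
}

def majority(votes):
    """True if more than half the raters judged the criterion met."""
    return sum(votes) > len(votes) / 2

met = [name for name, votes in ratings.items() if majority(votes)]
pct = 100 * len(met) / len(ratings)
print(f"{pct:.0f}% of activities met the CS standard")  # 67%
```

A tie-breaking rule would be needed for an even split (2-2), which the strict inequality above counts as "not met"; the paper may have handled ties differently.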