

This content will become publicly available on April 16, 2026

Title: From checkboxes to competitive edge: Crafting effective programmatic assessments.
We discuss the importance of program assessment and evaluation, particularly in the context of grant development, and the key components of a competitive assessment plan.
Award ID(s):
2419948
PAR ID:
10648077
Author(s) / Creator(s):
Publisher / Repository:
Penn State https://www.huck.psu.edu/seed-funding-large-proposal-catalysis/huck-catalysis/events-training
Date Published:
Format(s):
Medium: X
Institution:
Penn State University
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Unlike summative assessment that is aimed at grading students at the end of a unit or academic term, formative assessment is assessment for learning, aimed at monitoring ongoing student learning to provide feedback to both student and teacher, so that learning gaps can be addressed during the learning process. Education research points to formative assessment as a crucial vehicle for improving student learning. Formative assessment in K-12 CS and programming classrooms remains a crucial unaddressed need. Given that assessment for learning is closely tied to teacher pedagogical content knowledge, formative assessment literacy needs to also be a topic of CS teacher PD. This position paper addresses the broad need to understand formative assessment and build a framework to understand the what, why, and how of formative assessment of introductory programming in K-12 CS. It shares specific programming examples to articulate the cycle of formative assessment, diagnostic evaluation, feedback, and action. The design of formative assessment items is informed by CS research on assessment design, albeit related largely to summative assessment and in CS1 contexts, and learning of programming, especially student misconceptions. It describes what teacher formative assessment literacy PD should entail and how to catalyze assessment-focused collaboration among K-12 CS teachers through assessment platforms and repositories.
  2.
    Evidence-centered design (ECD) is an assessment framework tailored to provide structure and rigor to the assessment development process, and also to generate evidence of assessment validity by tightly coupling assessment tasks with focal knowledge, skills, and abilities (FKSAs). This framework is particularly well-suited to FKSAs that are complex and multi-part (Mislevy and Haertel, 2006), as is the case with much of the focal content within the computer science (CS) domain. This paper presents an applied case of ECD used to guide assessment development in the context of a redesigned introductory CS curriculum. In order to measure student learning of CS skills and content taught through the curriculum, knowledge assessments were written and piloted. The use of ECD provided an organizational framework for assessment development efforts, offering assessment developers a clear set of steps with accompanying documentation and decision points, as well as providing robust validity evidence for the assessment. The description of an application of ECD for assessment development within the context of an introductory CS course illustrates its utility and effectiveness, and also provides a guide for researchers carrying out related work. 
  3. This article reviews case studies that have used remote sensing data for different aspects of flood crop loss assessment. The review systematically identifies a total of 62 empirical case studies from the past three decades. The number of case studies has recently increased because of the growing availability of remote sensing data. In the past, flood crop loss assessment was very generalized and time-intensive because of the dependency on survey-based data collection. Remote sensing data availability makes rapid flood loss assessment possible. This study groups flood crop loss assessment approaches into three broad categories: a flood-intensity-based approach, a crop-condition-based approach, and a hybrid of the two. Flood crop damage assessment is more precise when both flood information and crop condition are incorporated into damage assessment models. This review discusses the strengths and weaknesses of the different loss assessment approaches. Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat are the dominant sources of optical remote sensing data for flood crop loss assessment. Remote-sensing-based vegetation indices (VIs) have been utilized extensively for crop damage assessments in recent years. Many case studies also relied on microwave remote sensing data because of the inability of optical remote sensing to see through clouds. The recent free-of-charge availability of synthetic-aperture radar (SAR) data from Sentinel-1 will advance flood crop damage assessment. Data for the validation of loss assessment models are scarce. Recent advancements in data archiving and distribution through web technologies will be helpful for loss assessment and validation.
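    The vegetation indices mentioned above compare reflectance in different spectral bands; the most common, NDVI, is (NIR − Red) / (NIR + Red). A minimal sketch of how a crop-condition-based approach might flag a damaged pixel follows — the reflectance values and the damage threshold are illustrative assumptions, not figures from any case study in the review.

    ```python
    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        return (nir - red) / (nir + red)

    # Hypothetical per-pixel reflectances before and after a flood event.
    before = ndvi(nir=0.45, red=0.08)   # dense healthy vegetation -> high NDVI
    after = ndvi(nir=0.20, red=0.15)    # stressed / inundated crop -> lower NDVI

    DAMAGE_THRESHOLD = 0.2              # illustrative cutoff, not from the review
    flagged = (before - after) > DAMAGE_THRESHOLD
    ```

    In a real workflow the same comparison would run over whole MODIS, Landsat, or Sentinel image arrays, and the threshold would be calibrated against ground-truth loss data.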
  4. Learning analytics uses large amounts of data about learner interactions in digital learning environments to understand and enhance learning. Although measurement is a central dimension of learning analytics, there has thus far been little research that examines links between learning analytics and assessment. This special issue of Computers in Human Behavior highlights 11 studies that explore how links between learning analytics and assessment can be strengthened. The contributions of these studies can be broadly grouped into three categories: analytics for assessment (learning analytic approaches as forms of assessment); analytics of assessment (applications of learning analytics to answer questions about assessment practices); and validity of measurement (conceptualization of and practical approaches to assuring validity in measurement in learning analytics). The findings of these studies highlight pressing scientific and practical challenges and opportunities in the connections between learning analytics and assessment that will require interdisciplinary teams to address, including task design, analysis of learning progressions, trustworthiness, and fairness, in order to unlock the full potential of the links between learning analytics and assessment.
  5. Instructors routinely use automated assessment methods to evaluate the semantic qualities of student implementations and, sometimes, test suites. In this work, we distill a variety of automated assessment methods in the literature down to a pair of assessment models. We identify pathological assessment outcomes in each model that point to underlying methodological flaws. These theoretical flaws broadly threaten the validity of the techniques, and we actually observe them in multiple assignments of an introductory programming course. We propose adjustments that remedy these flaws and then demonstrate, on these same assignments, that our interventions improve the accuracy of assessment. We believe that with these adjustments, instructors can greatly improve the accuracy of automated assessment.
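    A common instance of the kind of automated assessment the abstract describes is scoring a submission by how often it agrees with a reference implementation on instructor-chosen inputs. The sketch below is illustrative only — the assignment (absolute value), the buggy submission, and the scoring rule are hypothetical, not the models or flaws the paper analyzes.

    ```python
    def reference_abs(x):
        """Instructor's reference solution."""
        return x if x >= 0 else -x

    def student_abs(x):
        """Hypothetical student submission that forgets to negate negatives."""
        return x

    def score(student_fn, reference_fn, inputs):
        """Fraction of inputs on which the submission matches the reference."""
        passed = sum(student_fn(x) == reference_fn(x) for x in inputs)
        return passed / len(inputs)

    inputs = [-3, -1, 0, 2, 5]
    result = score(student_abs, reference_abs, inputs)   # 3 of 5 inputs agree
    ```

    The paper's observation is that scores produced by such models can be pathological, for example when the chosen inputs under-represent the cases a flawed submission gets wrong, which is why the choice of assessment model matters.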