Search for: All records

Award ID contains: 1661263


  1.
    We systematically compared two coding approaches for generating training datasets for machine learning (ML): (i) a holistic approach based on learning progression levels and (ii) a dichotomous, analytic approach to multiple concepts in student reasoning, deconstructed from holistic rubrics. We evaluated four constructed-response assessment items for undergraduate physiology, each targeting five levels of a developing flux learning progression in an ion context. Human-coded datasets were used to train two ML models: (i) an ensemble of eight classification algorithms implemented in the Constructed Response Classifier (CRC) and (ii) a single classification algorithm implemented in LightSide Researcher’s Workbench. Human coding agreement on approximately 700 student responses per item was high for both approaches, with Cohen’s kappas ranging from 0.75 to 0.87 for holistic scoring and from 0.78 to 0.89 for analytic composite scoring. ML model performance varied across items and rubric type. For two items, training sets from both coding approaches produced similarly accurate ML models, with differences in Cohen’s kappa between machine and human scores of 0.002 and 0.041. For the other two items, ML models trained on analytically coded responses and combined into a composite score outperformed models trained on holistic scores, with increases in Cohen’s kappa of 0.043 and 0.117. These items used a more complex scenario involving the movement of two ions; analytic coding may be beneficial for unpacking this additional complexity. 
  2. Tanner (Ed.)
    Recent calls in biology education research (BER) have recommended that researchers leverage learning theories and methodologies from other disciplines to investigate the mechanisms by which students develop sophisticated ideas. We suggest design-based research from the learning sciences is a compelling methodology for achieving this aim. Design-based research investigates the “learning ecologies” that move student thinking toward mastery. These “learning ecologies” are grounded in theories of learning, produce measurable changes in student learning, generate design principles that guide the development of instructional tools, and are enacted using extended, iterative teaching experiments. In this essay, we introduce readers to the key elements of design-based research, using our own research into student learning in undergraduate physiology as an example of design-based research in BER. Then, we discuss how design-based research can extend work already done in BER and foster interdisciplinary collaborations among cognitive and learning scientists, biology education researchers, and instructors. We also explore some of the challenges associated with this methodological approach. 
  3. Vision and Change challenged biology instructors to develop evidence-based instructional approaches that were grounded in the core concepts and competencies of biology. This call for reform provides an opportunity for new educational tools to be incorporated into biology education. In this essay, we advocate for learning progressions as one such educational tool. First, we address what learning progressions are and how they leverage research from the cognitive and learning sciences to inform instructional practices. Next, we use a published learning progression about carbon cycling to illustrate how learning progressions describe the maturation of student thinking about a key topic. Then, we discuss how learning progressions can inform undergraduate biology instruction, citing three particular learning progressions that could guide instruction about a number of key topics taught in introductory biology courses. Finally, we describe some challenges associated with learning progressions in undergraduate biology and some recommendations for how to address these challenges. 
  4. Constructed responses can be used to assess the complexity of student thinking and can be evaluated using rubrics. The two most common rubric types are holistic and analytic. Holistic rubrics may be difficult to use with expert-level reasoning that has additive or overlapping language. To unpack complexity in holistic rubrics at a large scale, we have developed a systematic approach called deconstruction. We define deconstruction as the process of converting a holistic rubric into its defining individual conceptual components, which can then be used for analytic rubric development and application. These individual components can later be recombined into the holistic score, which stays true to the purpose of the holistic rubric while maximizing the benefits and minimizing the shortcomings of each rubric type. This paper outlines the deconstruction process and presents a case study showing the concept definitions for a hierarchical holistic rubric developed for an undergraduate physiology content-reasoning context. These methods offer one way for assessment developers to unpack complex student reasoning, which may ultimately improve the reliability and validation of assessments targeted at uncovering large-scale complex scientific reasoning. 
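The agreement statistic reported in item 1, Cohen’s kappa between human and machine scores on the same responses, can be computed directly from paired labels. Below is a minimal sketch assuming scikit-learn is available; the score lists are illustrative placeholders, not the study’s data.

```python
# A minimal sketch (not the authors' code) of the agreement check described in
# item 1: Cohen's kappa between human and machine scores for the same responses.
# The label lists below are illustrative placeholders, not the study's data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical holistic scores (learning-progression levels 1-5), one per response.
human_scores   = [3, 2, 5, 4, 1, 3, 2, 4]
machine_scores = [3, 2, 4, 4, 1, 3, 2, 5]

kappa = cohen_kappa_score(human_scores, machine_scores)
print(f"Human-machine agreement (Cohen's kappa): {kappa:.3f}")
```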
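Item 4 describes recombining dichotomous analytic codes into a composite holistic score. The sketch below illustrates that recombination step under assumed concept names and an assumed level-assignment rule; the actual rubric logic is defined by the assessment developers and is not reproduced here.

```python
# A minimal sketch of recombining dichotomous analytic codes into a composite
# holistic level, as described in item 4. The concept names and the mapping
# rule are hypothetical illustrations, not the published rubric.
from typing import Dict

def composite_level(codes: Dict[str, bool], top_level: int = 5) -> int:
    """Map per-concept presence/absence codes to a single holistic level."""
    n_present = sum(codes.values())              # concepts demonstrated in the response
    # Illustrative rule: each demonstrated concept raises the level by one,
    # starting from level 1 and capped at the top of the progression.
    return min(1 + n_present, top_level)

response_codes = {
    "identifies_concentration_gradient": True,   # hypothetical concept labels
    "relates_gradient_to_ion_flux": True,
    "accounts_for_membrane_permeability": False,
    "integrates_electrical_and_chemical_gradients": False,
}
print(composite_level(response_codes))           # -> 3 under this illustrative rule
```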