
Search for: All records

Creators/Authors contains: "Auby, H."


  1. This work-in-progress paper expands on a collaboration between engineering education researchers and machine learning researchers to automate the analysis of written responses to conceptually challenging questions in statics and dynamics courses (Authors, 2022). Using the Concept Warehouse (Koretsky et al., 2014), written justifications of ConcepTests (CTs) were gathered from statics and dynamics courses at a diverse set of two- and four-year institutions. Written justifications for CTs have been used to support active learning pedagogies, which makes them important for investigating how students construct their problem-solving narratives of understanding. However, despite the large benefit that analysis of student written responses may provide to instructors and researchers, manual review of responses is cumbersome, limits the scale of analysis, and can be prone to human bias. To improve the analysis of student written responses, machine learning has been used in various educational contexts to analyze both short and long texts (Burstein et al., 2020; Burstein et al., 2021). Natural Language Processing (NLP) draws on transformer-based machine learning models (Brown et al., 2020; Raffel et al., 2019), which can be applied through fine-tuning or in-context learning and used to train algorithms that automate the coding of written responses. Only a few studies in educational applications have leveraged transformer-based machine learning models, prompting an investigation into their use in STEM education. However, work in NLP has been criticized for heightening the possibility of perpetuating and even amplifying harmful stereotypes and implicit biases (Chang et al., 2019; Mayfield et al., 2019). In this study, we detail our aim to use NLP for linguistic justice. Using methods such as text summarization, topic modeling, and text classification, we identify key aspects of students' narratives of understanding in written responses to mechanics and statics CTs.
Through this process, we seek to use machine learning to identify the different ways students talk about a problem and their understanding at any point in the narrative formation process. Thus, we hope to help reduce human bias in the classroom and in technology by giving instructors and researchers a diverse set of narratives that offer insight into their students' histories, identities, and understanding. These can then be used to connect technological knowledge to students' everyday lives.
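The abstract above describes automating the coding of student written responses. As a minimal, purely illustrative sketch of what "coding" means here (the codebook labels, keywords, and example responses below are invented for illustration and are not from the study, which uses transformer-based NLP models rather than keyword matching):

```python
import re

# Hypothetical codebook: each qualitative code maps to trigger keywords.
# A real pipeline would learn these associations with a fine-tuned
# transformer instead of a fixed keyword list.
CODEBOOK = {
    "force-balance": {"equilibrium", "balance", "net", "sum"},
    "motion": {"acceleration", "velocity", "moving", "speed"},
}

def code_response(text: str) -> list[str]:
    """Return every code whose keywords appear in the response text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(code for code, kws in CODEBOOK.items() if words & kws)

# Invented example justifications, coded automatically:
responses = [
    "The net force is zero so the forces balance.",
    "The block keeps moving because velocity stays constant.",
]
labels = [code_response(r) for r in responses]
```

Even this toy baseline shows why automation appeals: once a coding scheme exists, every response is labeled consistently, whereas manual review is slow and subject to rater bias.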
    Free, publicly-accessible full text available June 1, 2024