

Search for: All records

Creators/Authors contains: "West, Matthew"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. This full research paper explores how second-chance testing can be used as a strategy for mitigating students’ test anxiety in STEM courses, thereby improving their performance and experience. Second-chance testing is a strategy in which students are given an opportunity to take an assessment twice. We conducted a mixed-methods study to explore second-chance testing as a potential solution to test anxiety. First, we interviewed a diverse group of STEM students (N = 23) who had taken courses with second-chance testing to ask about the stress and anxiety associated with testing. We then administered a survey on test anxiety to STEM students in seven courses that offered second-chance tests at Midwestern University (N = 448). We found that second-chance testing led to a 30% reduction in students’ reported test anxiety. Students also reported reduced stress throughout the semester, even outside of testing windows, due to the availability of second-chance testing. Our study included an assortment of STEM courses where second-chance testing was deployed, which indicates that second-chance testing is a viable strategy for reducing anxiety in a variety of contexts. We also explored whether the resultant reduction in test anxiety led to complacency, procrastination, or other suboptimal student behavior because of the extra chance provided. We found that the majority of students reported working hard on their initial test attempts even when second-chance testing was available.
    Free, publicly-accessible full text available June 26, 2024
  2. In this full research paper, we examine various grading policies for second-chance testing. Second-chance testing refers to giving students the opportunity to take a second version of a test for some form of grade replacement. As a pedagogical strategy, second-chance testing bears some similarities to mastery learning but is less expensive to implement. Previous work has shown that second-chance testing is associated with improved performance, but there is still a lack of clarity regarding the optimal grading policies for this testing strategy. We interviewed seven instructors who use second-chance testing in their courses to collect data on why they chose specific policies. We then conducted structured interviews with students (N = 11) to capture more nuance about students’ decision-making processes under the different grading policies. Afterwards, we conducted a quasi-experimental study to compare two second-chance testing grading policies and determine how they influenced students across multiple dimensions. We varied the grading policies used in two similar sophomore-level engineering courses. We collected assessment data and administered a survey that queried students (N = 513) about their behavior and reactions to both grading policies. Surprisingly, we found that students’ preferences between these two policies were almost perfectly split. We conclude that there are likely many policies that perform well by being simple and encouraging serious attempts on both tests.
    Free, publicly-accessible full text available June 26, 2024
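    As an illustration of what grade replacement can look like in practice, the sketch below implements two hypothetical second-chance grading policies in Python: keeping the better of the two attempts, and letting the retake replace the first score only up to a cap. The function names, the cap value, and the policies themselves are illustrative assumptions, not necessarily the two policies compared in the study above.

        # Two illustrative grade-replacement policies for second-chance testing.
        # Hypothetical examples; not necessarily the policies studied above.
        from typing import Optional

        def best_of_both(first: float, second: Optional[float]) -> float:
            """Keep whichever attempt scored higher (no penalty for retaking)."""
            return first if second is None else max(first, second)

        def capped_retake(first: float, second: Optional[float], cap: float = 90.0) -> float:
            """Let the retake replace the first score, but cap it below full credit
            so that a serious first attempt still matters."""
            if second is None:
                return first
            return max(first, min(second, cap))

        # Example: 72 on the first attempt, 95 on the retake.
        print(best_of_both(72, 95))    # 95
        print(capped_retake(72, 95))   # 90 (retake capped)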
  3. We conducted an across-semester quasi-experimental study that compared students' outcomes under frequent and infrequent testing regimens in an introductory computer science course. Students in the frequent testing semester (4 quizzes and 4 exams) outperformed students in the infrequent testing semester (1 midterm and 1 final exam) by 9.1 to 13.5 percentage points on code-writing questions. We complement these performance results with additional data from surveys, interviews, and analysis of students' textbook activity. In the surveys, students reported a preference for the smaller number of exams but rated the exams in the frequent testing semester as both less difficult and less stressful, in spite of the exams containing identical content. In the interviews, students predominantly indicated (1) that the frequent testing regimen encourages better study habits (e.g., more attention to work, less cramming) and leads to better learning, (2) that frequent testing reduces test anxiety, and (3) that the frequent testing regimen was more fair, but these opinions were not universally held. The students' impression that the frequent testing regimen would lead to better study habits is borne out in our analysis of students' activities in the course's interactive textbook. In the frequent testing semester, students spent more time on textbook readings and appeared to answer textbook questions more earnestly (i.e., less "gaming the system" by using hints and brute force).
  4. Resistive random-access memory (RRAM) devices have been widely studied for neuromorphic, in-memory computing. One of the most studied RRAM structures consists of a titanium capping layer and a HfOx adaptive oxide. Although these devices show promise in improving neuromorphic circuits, high variability, non-linearity, and asymmetric resistance changes limit their usefulness. Many studies have improved linearity by changing the materials in or around the device, modifying the circuitry, or adjusting the analog bias conditions. However, the impact of prior biasing conditions on the observed analog resistance change is not well understood. Experimental results in this study demonstrate that higher reset voltages applied after forming cause a greater resistance change during subsequent identical analog pulsing. A multiphysics finite element model suggests that this greater analog resistance change is due to a higher concentration of oxygen ions stored in the titanium capping layer with increasing magnitude of the reset voltage. This work suggests that local ion concentration variations of just tens of atoms in the titanium capping layer cause significant resistance variation during analog operation.

     
  5. Abstract

    The broadband solar K-corona is linearly polarized due to Thomson scattering. Various strategies have been used to represent coronal polarization. Here, we present a new way to visualize the polarized corona, using observations from the 2023 April 20 total solar eclipse in Australia in support of the Citizen CATE 2024 project. We convert observations in the common four-polarizer orthogonal basis (0°, 45°, 90°, and 135°) to −60°, 0°, and +60° (MZP) polarization, which is homologous to R, G, B color channels. The unique image generated provides some sense of how humans might visualize polarization if we could perceive it in the same way we perceive color.

     
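    For readers unfamiliar with the conversion mentioned above, the sketch below shows one standard route from the four-polarizer basis to MZP via the linear Stokes parameters, assuming ideal polarizers and equal exposures. It is a generic illustration with placeholder array names, not the Citizen CATE 2024 calibration pipeline.

        import numpy as np

        def four_pol_to_mzp(i0, i45, i90, i135):
            # Linear Stokes parameters from the 0/45/90/135 degree images
            I = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaged estimate)
            Q = i0 - i90
            U = i45 - i135

            # Intensity through an ideal linear polarizer at angle theta:
            #   I_theta = 0.5 * (I + Q*cos(2*theta) + U*sin(2*theta))
            def through(theta_deg):
                t = np.deg2rad(2.0 * theta_deg)
                return 0.5 * (I + Q * np.cos(t) + U * np.sin(t))

            return through(-60.0), through(0.0), through(60.0)   # M, Z, P

        def mzp_to_rgb(m, z, p):
            # Map M, Z, P onto R, G, B for display with a simple shared normalization
            rgb = np.stack([m, z, p], axis=-1)
            return np.clip(rgb / rgb.max(), 0.0, 1.0)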
  6. Abstract

    Background

    Grades in college and university STEM courses are an important determinant of student persistence in STEM fields. Recent studies have used the grade offset/grade penalty method to explore why students have lower grades in STEM courses than their GPAs would predict. These results are in doubt, however, because the studies treat GPA as a reliable measure of academic performance, which is a disputed assumption. Using a predictive model of student performance, it is possible to produce a more accurate measure of academic performance than the observed GPA and to determine whether STEM courses are graded more stringently, and under which circumstances.

    Results

    A weighted logistic model of GPA predicts academic performance better than the observed GPA does. Using this calibrated GPA, the grade offset method indicates that STEM courses, departments, and programs grade significantly more stringently than their non-STEM counterparts. The average difference between STEM and non-STEM grade offsets is around four tenths of a grade point. An exception is general education courses offered by STEM departments, which are graded with the same leniency as non-STEM courses. Grade offset calculations that use the observed GPA systematically underestimate the negative offset in STEM grading relative to calculations that use the calibrated GPA. The calibrated GPA is also much more highly correlated with standardized tests such as the ACT (r = 0.49) than the observed GPA is (r = 0.25).

    Conclusion

    Observed GPA is a systematically biased measure of academic performance, and should not be used as a basis for determining the presence of grading inequity. Logistic models of GPA provide a more reliable measure of academic performance. When comparing otherwise academically similar students, we find that STEM students have substantially lower grades and GPAs, and that this is the consequence of harder (more stringent) grading in STEM courses.

     
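    The grade offset (grade penalty) baseline discussed above is straightforward to compute from transcript records: each course grade is compared with the student's grade-point average over all of their other courses. The pandas sketch below shows only that baseline calculation, with hypothetical column names; the paper's weighted logistic (calibrated) GPA model is not reproduced here.

        import pandas as pd

        def add_grade_offsets(records: pd.DataFrame) -> pd.DataFrame:
            """records columns (hypothetical): student_id, course_id, grade_points, credits."""
            out = records.copy()
            out["pts"] = out["grade_points"] * out["credits"]

            # Per-student totals broadcast back to each enrollment row
            totals = out.groupby("student_id")[["pts", "credits"]].transform("sum")

            # GPA over all *other* courses (GPAO), then the offset for this course
            out["gpao"] = (totals["pts"] - out["pts"]) / (totals["credits"] - out["credits"])
            out["offset"] = out["grade_points"] - out["gpao"]
            return out

        # A course or department with a consistently negative mean offset is graded
        # more stringently than the students' other coursework would predict.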
  7. Dorn, Brian; Vahrenhold, Jan (Eds.)
    Background and Context

    Lopez and Lister first presented evidence for a skill hierarchy of code reading, tracing, and writing for introductory programming students. Further support for this hierarchy could help computer science educators sequence course content to best build student programming skill.

    Objective

    This study aims to replicate a slightly simplified hierarchy of skills in CS1 using a larger body of students (600+ vs. 38) in a non-major introductory Python course with computer-based exams. We also explore the validity of other possible hierarchies.

    Method

    We collected student score data on 4 kinds of exam questions. Structural equation modeling was used to derive the hierarchy for each exam.

    Findings

    We find multiple best-fitting structural models. The original hierarchy does not appear among the “best” candidates, but similar models do. We also determined that our methods provide us with correlations between skills and do not answer a more fundamental question: what is the ideal teaching order for these skills?

    Implications

    This modeling work is valuable for understanding the possible correlations between fundamental code-related skills. However, analyzing student performance on these skills at a moment in time is not sufficient to determine teaching order. We present possible study designs for exploring this more actionable research question.
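    As a rough illustration of the structural equation modeling step described above, the sketch below fits one candidate reading → tracing → writing path model using the semopy package (a Python SEM library, assumed available) on synthetic scores generated for the example. The model syntax, column names, and data are illustrative assumptions, not the authors' specification or dataset.

        import numpy as np
        import pandas as pd
        import semopy

        # Synthetic per-student scores for three skill measures, standing in for
        # aggregated exam-question scores; the real study used its own exam data.
        rng = np.random.default_rng(0)
        n = 200
        reading = rng.normal(0.0, 1.0, n)
        tracing = 0.6 * reading + rng.normal(0.0, 0.8, n)
        writing = 0.7 * tracing + rng.normal(0.0, 0.8, n)
        scores = pd.DataFrame({"reading": reading, "tracing": tracing, "writing": writing})

        # One candidate hierarchy: reading supports tracing, tracing supports writing
        model = semopy.Model("""
        tracing ~ reading
        writing ~ tracing
        """)
        model.fit(scores)
        print(model.inspect())           # path coefficient estimates
        print(semopy.calc_stats(model))  # fit statistics for comparing candidate models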
  8. Abstract

    During the quadrature period (2010 December–2011 August) the STEREO-A and B satellites were approximately at right angles to the SOHO satellite. This alignment was particularly advantageous for determining the coronal mass ejection (CME) properties, since the closer a CME propagates to the plane of sky, the smaller the measurement inaccuracies are. Our primary goal was to study dimmings and their relationship to CMEs and flares during this time. We identified 53 coronal dimmings using STEREO/EUVI 195 Å observations, and linked 42 of the dimmings to CMEs (observed with SOHO/LASCO/C2) and 23 to flares. Each dimming in the catalog was processed with the Coronal Dimming Tracker, which detects transient dark regions in extreme ultraviolet images directly, without the use of difference images. This approach allowed us to observe footpoint dimmings: the regions of mass depletion at the footpoints of erupting magnetic flux rope structures. Our results show that the CME mass has a linear, moderate correlation with dimming total EUV intensity change, and a monotonic, moderate correlation with dimming area. These results suggest that the more the dimming intensity drops and the larger the erupting region is, the more plasma is evacuated. We also found a strong correlation between the flare duration and the total change in EUV intensity. The correlation between dimming properties showed that larger dimmings tend to be brighter; they go through more intensity loss and generally live longer, supporting the hypothesis that larger transient open regions release more plasma and take longer to close down and refill with plasma.

     
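    For reference, the two kinds of correlation quoted above are typically measured with a Pearson coefficient (linear relation) and a Spearman rank coefficient (monotonic relation). The sketch below shows that calculation for catalog-style per-event arrays; the argument names are placeholders, and the actual catalog values are in the paper.

        from scipy import stats

        def dimming_correlations(cme_mass, intensity_change, dimming_area):
            """Pearson r for the linear relation (CME mass vs. total EUV intensity change)
            and Spearman rho for the monotonic relation (CME mass vs. dimming area).
            Inputs are equal-length 1-D arrays of per-event values."""
            r, p_r = stats.pearsonr(cme_mass, intensity_change)
            rho, p_rho = stats.spearmanr(cme_mass, dimming_area)
            return {"pearson_r": r, "pearson_p": p_r,
                    "spearman_rho": rho, "spearman_p": p_rho}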