

Title: The many faces of scientific inquiry: Effectively measuring what students do and not only what they say
Abstract

Science education frameworks in the United States have moved strongly in recent years to incorporate more dimensions of learning, including measuring student use of scientific practices employed during scientific inquiry. For instance, the Next Generation Science Standards and related multidimensional frameworks, recently adopted or adapted by more than 30 U.S. states, include numerous complex science performance skills required of students. This article considers whether valid and reliable evidence can be obtained in online performance tasks to yield estimates both of student inquiry practices and of the ability of students to explain their understanding of scientific concepts. A data set from a Virtual Performance Assessment (VPA) task, There's a New Frog in Town, is examined. Delivered through an online system, the VPA task engages students in guided inquiry through problem solving, modeling, and exploration. The VPAs are designed to produce evidence on more than one latent trait in the respondent performance. Results of the case study reported here indicated that maps of student proficiency in scientific inquiry could be generated from the VPA data set using measurement models. Incorporating process data through a new hybrid measurement model, mIRT‐Bayes, improved the reliability of the results. Overall, the results indicate that virtual performance tasks may be helpful for science assessment, especially if assessment time is short and a goal is to increase the validity and quality of performance measures with authentic and engaging virtual activities.
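The mIRT‐Bayes model named in the abstract, a hybrid of multidimensional item response theory (IRT) and Bayesian methods, is not specified in detail here. As a rough sketch of the compensatory multidimensional 2PL response function that mIRT-style models typically build on, with dimension labels and parameter values that are illustrative assumptions rather than the paper's:

# Illustrative sketch only: the compensatory multidimensional 2PL response
# function that mIRT-style models build on. All parameters and dimension
# labels below are hypothetical, not taken from the paper.
import numpy as np

def mirt_2pl_prob(theta, a, b):
    """P(correct response) for a multidimensional 2PL item.

    theta : (D,) latent traits, e.g., [inquiry practice, concept explanation]
    a     : (D,) item discriminations, one per latent dimension
    b     : scalar item difficulty (intercept form)
    """
    return 1.0 / (1.0 + np.exp(-(a @ theta - b)))

# Hypothetical student: strong on inquiry practice, weaker on explanation
theta = np.array([1.2, -0.3])
# Hypothetical item loading mainly on the inquiry dimension
a, b = np.array([1.5, 0.4]), 0.2
print(f"P(correct) = {mirt_2pl_prob(theta, a, b):.3f}")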

 
NSF-PAR ID:
10074289
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Journal of Research in Science Teaching
Volume:
55
Issue:
10
ISSN:
0022-4308
Page Range / eLocation ID:
p. 1469-1496
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Science fairs have a long history in the United States and internationally. Their implementation varies greatly (Kook et al., 2020), yet few empirical studies have examined the outcomes of these experiences for student learning. Research indicates that authentic scientific inquiry that focuses on students' agency in investigations can contribute to student learning (e.g., Houseal, Abd‐El‐Khalick, & Destefano, 2014). However, teachers have been challenged with implementing inquiry‐based investigations (e.g., Anderson, 2007; Harris & Rooks, 2010). As new science standards increase the demand for classroom science investigations that afford students opportunities to engage with science and engineering practices (SEPs; NGSS Lead States, 2013), research is needed to understand the role of teachers and how these experiences can contribute to student learning. In this article, we describe the results of a national study that included data from 21 middle school science fairs. Data included observations of 20 science fairs, pre- and post-science fair assessment data from 343 sixth grade students, and interviews or focus groups with 131 students, 122 teachers, 16 administrators, and 29 science fair judges. These data enabled the exploration of features of science fairs, including opportunities for students to engage in SEPs and teachers' support for SEPs through the science fair investigations. Findings reveal that science fair implementation varies considerably across schools. HLM analysis indicates that teachers' support for critiquing practices, particularly when it included students' engagement in evaluating the work of their peers, is positively associated with students' understanding of SEPs. Qualitative findings highlight the ways in which teachers structured students' experiences and supported their enactment of SEPs as they conducted their science fair investigations.
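    The HLM (hierarchical linear model) analysis mentioned above, with students nested in schools, is not specified in this abstract. A minimal sketch of a two-level random-intercept model of that general shape, assuming hypothetical variable names (post, pre, critique_support, school) and toy data:

    # Two-level HLM sketch: students nested in schools. Variable names and
    # data are hypothetical, not the study's actual model or measurements.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "post":             [14, 11, 13, 17, 9, 12, 15, 12, 14, 18, 10, 13],  # post-fair SEP score
        "pre":              [12, 10, 11, 15, 8, 10, 13, 11, 12, 16, 9, 12],   # pre-fair SEP score
        "critique_support": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],             # teacher supported peer critique
        "school":           ["A"] * 3 + ["B"] * 3 + ["C"] * 3 + ["D"] * 3,
    })

    # Random intercept per school; fixed effects for pretest and critique support
    result = smf.mixedlm("post ~ pre + critique_support", df, groups=df["school"]).fit()
    print(result.summary())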

     
  2. This paper examines how practicing teachers approach and evaluate students’ critical thinking processes in science, using the implementation of an online, inquiry-based investigation in middle school classrooms as the context for teachers’ observations. Feedback and ratings from three samples of science teachers were analysed to determine how they valued and evaluated component processes of students’ critical thinking and how such processes were related to their instructional approaches and student outcomes. Drawing from an integrated view of teacher practice, results suggested that practicing science teachers readily observed and valued critical thinking processes that aligned to goal intentions focused on domain content and successful student thinking. These processes often manifested as components of effective scientific reasoning—for example, gathering evidence, analysing data, evaluating ideas, and developing strong arguments. However, teachers also expressed avoidance intentions related to student confusion and uncertainty before and after inquiry-based investigations designed for critical thinking. These findings highlight a potential disconnect between the benefits of productive student struggle for critical thinking as endorsed in the research on learning and science education and the meaning that teachers ascribe to such struggle as they seek to align their instructional practices to classroom challenges. 
  3. Abstract

    Argumentation is fundamental to science education, both as a prominent feature of scientific reasoning and as an effective mode of learning—a perspective reflected in contemporary frameworks and standards. The successful implementation of argumentation in school science, however, requires a paradigm shift in science assessment from the measurement of knowledge and understanding to the measurement of performance and knowledge in use. Performance tasks requiring argumentation must capture the many ways students can construct and evaluate arguments in science, yet such tasks are both expensive and resource‐intensive to score. In this study we explore how machine learning text classification techniques can be applied to develop efficient, valid, and accurate constructed‐response measures of students' competency with written scientific argumentation that are aligned with a validated argumentation learning progression. Data come from 933 middle school students in the San Francisco Bay Area and are based on three sets of argumentation items in three different science contexts. The findings demonstrate that we have been able to develop computer scoring models that can achieve substantial to almost perfect agreement between human‐assigned and computer‐predicted scores. Model performance was slightly weaker for harder items targeting higher levels of the learning progression, largely due to the linguistic complexity of these responses and the sparsity of higher‐level responses in the training data set. Comparing the efficacy of different scoring approaches revealed that breaking down students' arguments into multiple components (e.g., the presence of an accurate claim or providing sufficient evidence), developing computer models for each component, and combining scores from these analytic components into a holistic score produced better results than holistic scoring approaches. However, this analytic approach was found to be differentially biased when scoring responses from English learner (EL) students as compared to responses from non‐EL students on some items. Differences in severity between human and computer scores for EL students across these approaches are explored, and potential sources of bias in automated scoring are discussed.
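    The component-then-combine scoring strategy described above can be illustrated with a minimal sketch: one binary text classifier per analytic argumentation component, with component predictions summed into a holistic score. This is not the authors' pipeline; the component names, training texts, and labels below are hypothetical.

    # Analytic-components scoring sketch: one classifier per argumentation
    # component, combined into a holistic score. Texts/labels are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    responses = [
        "The frog population changed because of runoff; the data table shows it.",
        "I think frogs are interesting animals.",
    ]
    labels = {
        "accurate_claim":      [1, 0],  # 1 = component present in the response
        "sufficient_evidence": [1, 0],
    }

    # Train one binary classifier per argumentation component
    models = {
        name: make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(responses, y)
        for name, y in labels.items()
    }

    def holistic_score(text):
        """Sum the predicted component indicators into a holistic score."""
        return sum(int(m.predict([text])[0]) for m in models.values())

    print(holistic_score("Runoff caused the change, and the table shows it."))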

     
  4. Abstract

    Jurors are increasingly exposed to scientific information in the courtroom. To determine whether providing jurors with gist information would assist their ability to make well-informed decisions, the present experiment used a Fuzzy Trace Theory-inspired intervention and tested it against traditional legal safeguards (i.e., judge instructions) while varying the scientific quality of the evidence. The results indicate that jurors who viewed high quality evidence rated the scientific evidence significantly higher than those who viewed low quality evidence, but they were unable to moderate the credibility of the expert witness or apply damages appropriately, resulting in poor calibration.

    Summary

    Jurors and juries are increasingly exposed to scientific information in the courtroom, and it remains unclear when they will base their decisions on a reasonable understanding of the relevant scientific information. Without such knowledge, the ability of jurors and juries to make well-informed decisions may be at risk, increasing the chances of unjust outcomes (e.g., false convictions in criminal cases). Therefore, there is a critical need to understand the conditions that affect jurors' and juries' sensitivity to the qualities of scientific information and to identify safeguards that can assist with scientific calibration in the courtroom. The current project addresses these issues with an ecologically valid experimental paradigm, making it possible to assess causal effects of evidence quality and safeguards as well as the role of a host of individual difference variables that may affect perceptions of testimony by scientific experts as well as liability in a civil case. Our main goal was to develop a simple, theoretically grounded tool to enable triers of fact (individual jurors) with a range of scientific reasoning abilities to appropriately weigh scientific evidence in court. We did so by testing a Fuzzy Trace Theory-inspired intervention in court and testing it against traditional legal safeguards. Appropriate use of scientific evidence reflects good calibration, which we define as being influenced more by strong scientific information than by weak scientific information. Inappropriate use reflects poor calibration, defined as relative insensitivity to the strength of scientific information. Fuzzy Trace Theory (Reyna & Brainerd, 1995) predicts that techniques for improving calibration can come from presentation of an easy-to-interpret, bottom-line "gist" of the information. Our central hypothesis was that laypeople's appropriate use of scientific information would be moderated both by external situational conditions (e.g., the quality of the scientific information itself, a decision aid designed to convey clearly the "gist" of the information) and by individual differences among people (e.g., scientific reasoning skills, cognitive reflection tendencies, numeracy, need for cognition, attitudes toward and trust in science). Identifying factors that promote jurors' appropriate understanding of and reliance on scientific information will contribute to general theories of reasoning based on scientific evidence, while also providing an evidence-based framework for improving the courts' use of scientific information. All hypotheses were preregistered on the Open Science Framework.
    Method

    Participants completed six questionnaires (counterbalanced): the Need for Cognition Scale (NCS; 18 items), Cognitive Reflection Test (CRT; 7 items), Abbreviated Numeracy Scale (ABS; 6 items), Scientific Reasoning Scale (SRS; 11 items), Trust in Science (TIS; 29 items), and Attitudes towards Science (ATS; 7 items). Participants then viewed a video depicting a civil trial in which the defendant sought damages from the plaintiff for injuries caused by a fall. The defendant (a bar patron) alleged that the plaintiff (a bartender) pushed him, causing him to fall and hit his head on the hard floor. Participants were informed at the outset that the defendant was liable; therefore, their task was to determine whether the plaintiff should be compensated. Participants were randomly assigned to 1 of 6 experimental conditions: 2 (quality of scientific evidence: high vs. low) x 3 (safeguard to improve calibration: gist information, no-gist information [control], judge instructions). An expert witness (a neuroscientist) hired by the court testified regarding the scientific strength of fMRI data (high [90 to 10 signal-to-noise ratio] vs. low [50 to 50 signal-to-noise ratio]) and presented gist or no-gist information both verbally (i.e., fairly high/about average) and visually (i.e., a graph). After viewing the video, participants were asked whether they would like to award damages; if they indicated yes, they were asked to enter a dollar amount. Participants then completed the Positive and Negative Affect Schedule-Modified Short Form (PANAS-MSF; 16 items), the Witness Credibility Scale (WCS; 20 items) for the expert, Witness Credibility and Influence on Damages ratings for each witness, manipulation check questions, and Understanding Scientific Testimony (UST; 10 items); 3 additional measures were collected but are beyond the scope of the current investigation. Finally, participants completed demographic questions, including questions about their scientific background and experience. The study was completed via Qualtrics, with participation from students (online vs. in-lab), MTurkers, and non-student community members. After removing those who failed attention check questions, 469 participants remained (243 men, 224 women, 2 did not specify gender) from a variety of racial and ethnic backgrounds (70.2% White, non-Hispanic).

    Results and Discussion

    There were three primary outcomes: quality of the scientific evidence, expert credibility (WCS), and damages. During initial analyses, each dependent variable was submitted to a separate 3 Gist Safeguard (safeguard, no safeguard, judge instructions) x 2 Scientific Quality (high, low) analysis of variance (ANOVA); a minimal sketch of this kind of analysis follows the abstract. Consistent with hypotheses, there was a significant main effect of scientific quality on strength of evidence, F(1, 463) = 5.099, p = .024: participants who viewed the high quality evidence rated the scientific evidence significantly higher (M = 7.44) than those who viewed the low quality evidence (M = 7.06). There were no significant main effects or interactions for witness credibility, indicating that the expert who provided scientific testimony was seen as equally credible regardless of scientific quality or gist safeguard. Finally, for damages, consistent with hypotheses, there was a marginally significant interaction between Gist Safeguard and Scientific Quality, F(2, 273) = 2.916, p = .056.
    However, post hoc t-tests revealed that significantly higher damages were awarded for low (M = 11.50) versus high (M = 10.51) scientific quality evidence, F(1, 273) = 3.955, p = .048, in the no-gist-with-judge-instructions safeguard condition, which was contrary to hypotheses. The data suggest that the judge instructions alone reversed the pattern: although the difference was nonsignificant, those who received the no-gist condition without judge instructions awarded higher damages for high (M = 11.34) versus low (M = 10.84) scientific quality evidence, F(1, 273) = 1.059, p = .30. Together, these results provide promising initial evidence that participants were able to differentiate between high and low scientific quality of evidence, though they used the scientific evidence inappropriately, as shown by their inability to discern expert credibility and apply damages, resulting in poor calibration. These results will provide the basis for more sophisticated analyses, including higher order interactions with individual differences (e.g., need for cognition) as well as tests of mediation using path analyses. [References omitted but available by request]

    Learning Objective: Participants will be able to determine whether providing jurors with gist information would assist in their ability to award damages in a civil trial.
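    As referenced above, a minimal sketch of a 3 (safeguard) x 2 (scientific quality) between-subjects ANOVA of this general form, with entirely hypothetical ratings rather than the study's data:

    # 3 x 2 between-subjects ANOVA sketch. All ratings below are hypothetical.
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        "rating":    [7.5, 7.2, 7.8, 7.0, 6.9, 7.1, 7.6, 7.3, 6.8, 7.0, 7.2, 6.7],
        "safeguard": ["gist", "gist", "none", "none", "judge", "judge"] * 2,
        "quality":   ["high"] * 6 + ["low"] * 6,
    })

    # Fit the factorial model and print the Type II ANOVA table
    model = ols("rating ~ C(safeguard) * C(quality)", data=df).fit()
    print(anova_lm(model, typ=2))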
  5. In March 2020, the global COVID-19 pandemic forced universities across the United States to immediately stop face-to-face activities and transition to virtual instruction. While this transition was not easy for anyone, the shift to online learning was especially difficult for STEM courses, particularly engineering, which has a strong practical/laboratory component. Additionally, underrepresented minority (URM) students in engineering experienced a range of difficulties during this transition. The purpose of this paper is to highlight underrepresented engineering students' experiences as a result of COVID-19. In particular, we aim to highlight stories shared by participants who indicated a desire to share their experience with their instructor. In order to better understand these experiences, research participants were asked to share a story, using the novel data collection platform SenseMaker, based on the following prompt: Imagine you are chatting with a friend or family member about the evolving COVID-19 crisis. Tell them about something you have experienced recently as an engineering student. Conducting a SenseMaker study involves four iterative steps: 1) Initiation is the process of designing signifiers, testing, and deploying the instrument; 2) Story Collection is the process of collecting data through narratives; 3) Sense-making is the process of exploring and analyzing patterns in the collection of narratives; and 4) Response is the process of amplifying positive stories and dampening negative stories to nudge the system to an adjacent possible (Van der Merwe et al., 2019). Unlike traditional surveys or other qualitative data collection methods, SenseMaker encourages participants to think more critically about the stories they share by inviting them to make sense of their story using a series of triads and dyads. After completing their narrative, participants were asked a series of triadic, dyadic, and sentiment-based multiple-choice questions (MCQs) relevant to their story. One MCQ in particular asked, "If you could do so without fear of judgment or retaliation, who would you share this story with?" with the following options: 1) Family, 2) Instructor, 3) Peers, 4) Prefer not to answer, 5) Other. A third of the participants indicated that they would share their story with their instructor; we therefore further explored this particular question. Additionally, this paper aims to highlight the subset of students whose primary motivation for their actions was Necessity. High-level qualitative findings from the data show that students valued Grit and Perseverance, that recent experiences influenced their Sense of Purpose, and that their decisions were made largely on Intuition. Chi-squared tests (a minimal sketch follows this abstract) showed no significant differences by race in the desire to share with their instructor; however, there were significant differences by gender, suggesting that gender has a large impact on the complexity of navigating school during this time. Lastly, ~50% of participants reported feeling negative or extremely negative about their experiences, ~30% reported feeling neutral, and ~20% reported feeling positive or extremely positive about their experiences. In the study, a total of 500 micro-narratives from underrepresented engineering students were collected from June to July 2020.
    Undergraduate and graduate students were recruited for participation through the researchers' personal networks, social media, and organizations like NSBE. Participants had the option to indicate who would be able to read their stories: 1) Everyone, 2) Researchers Only, or 3) No One. This work presents the qualitative stories of those who granted permission for everyone to read.
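    As a minimal sketch of the chi-squared test of independence referenced above, using a hypothetical gender-by-willingness-to-share contingency table rather than the study's counts:

    # Chi-squared test sketch: gender vs. willingness to share one's story
    # with an instructor. All counts are hypothetical, not study data.
    from scipy.stats import chi2_contingency

    table = [
        [60, 120],  # men:   [would share, would not share]
        [95, 105],  # women: [would share, would not share]
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")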