- Award ID(s): 1757207
- NSF-PAR ID: 10462531
- Date Published:
- Journal Name: Lecture Notes in Computer Science
- ISSN: 0302-9743
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Engineering Design (ED) challenges are increasingly used as a context for learning science. Research shows a need for strategies that help learners identify, apply, and reflect on ways scientific principles can inform the creation and evaluation of ED solutions. We investigate the use of contrasting cases and argumentation scaffolds to facilitate evidence-based reasoning in CAD-supported ED tasks. Elementary education majors in a physics course analyzed solutions to an ED problem in two conditions: 1) identify similarities and differences, or 2) evaluate and produce an argument for a “good” design solution. We found that the argumentation condition used scientific evidence-based reasoning significantly more frequently in their responses than the control. Results indicate that contrasting cases with argumentation scaffolds show promise in facilitating students’ use of evidence-based reasoning in ED tasks.
-
Abstract Argumentation is fundamental to science education, both as a prominent feature of scientific reasoning and as an effective mode of learning, a perspective reflected in contemporary frameworks and standards. The successful implementation of argumentation in school science, however, requires a paradigm shift in science assessment from the measurement of knowledge and understanding to the measurement of performance and knowledge in use. Performance tasks requiring argumentation must capture the many ways students can construct and evaluate arguments in science, yet such tasks are both expensive and resource-intensive to score. In this study we explore how machine learning text classification techniques can be applied to develop efficient, valid, and accurate constructed-response measures of students' competency with written scientific argumentation that are aligned with a validated argumentation learning progression. Data come from 933 middle school students in the San Francisco Bay Area and are based on three sets of argumentation items in three different science contexts. The findings demonstrate that we have been able to develop computer scoring models that can achieve substantial to almost perfect agreement between human-assigned and computer-predicted scores. Model performance was slightly weaker for harder items targeting higher levels of the learning progression, largely due to the linguistic complexity of these responses and the sparsity of higher-level responses in the training data set. Comparing the efficacy of different scoring approaches revealed that breaking down students' arguments into multiple components (e.g., the presence of an accurate claim or providing sufficient evidence), developing computer models for each component, and combining scores from these analytic components into a holistic score produced better results than holistic scoring approaches. However, this analytic approach was found to be differentially biased when scoring responses from English learner (EL) students as compared to responses from non-EL students on some items. Differences in scoring severity between humans and the computer for EL students across these approaches are explored, and potential sources of bias in automated scoring are discussed.
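As a rough illustration of the analytic scoring idea described above, the sketch below trains one text classifier per argument component and sums the component predictions into a holistic score. It is not the study's actual pipeline; the component names, the TF-IDF features, the logistic regression classifier, and the summation rule are all assumptions chosen for illustration.

```python
# Minimal sketch of analytic component scoring (not the study's pipeline).
# Assumes `responses` is a list of student response strings and
# `component_labels` maps each hypothetical component name to binary labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

COMPONENTS = ["accurate_claim", "sufficient_evidence", "reasoning"]  # illustrative names

def train_component_models(responses, component_labels):
    """Fit one text classifier per argument component."""
    models = {}
    for name in COMPONENTS:
        clf = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2)),   # simple lexical features
            LogisticRegression(max_iter=1000),
        )
        clf.fit(responses, component_labels[name])  # binary: component present or not
        models[name] = clf
    return models

def holistic_score(models, response):
    """Combine analytic predictions into one holistic score by summing
    component presence (an assumed aggregation rule)."""
    return sum(int(models[name].predict([response])[0]) for name in COMPONENTS)
```

In practice, such models would be evaluated against human-assigned scores with a chance-corrected agreement statistic such as Cohen's kappa, which is the kind of human-computer agreement the abstract reports.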
-
Abstract For students to meaningfully engage in science practices, substantive changes need to occur to deeply entrenched instructional approaches, particularly those related to classroom discourse. Because teachers are critical in establishing how students are permitted to interact in the classroom, it is imperative to examine their role in fostering learning environments in which students carry out science practices. This study explores how teachers describe, or frame, expectations for classroom discussions pertaining to the science practice of argumentation. Specifically, we use the theoretical lens of a participation framework to examine how teachers emphasize particular actions and goals for their students' argumentation. Multiple‐case study methodology was used to explore the relationship between two middle school teachers' framing for argumentation, and their students' engagement in an argumentation discussion. Findings revealed that, through talk moves and physical actions, both teachers emphasized the importance of students driving the argumentation and interacting with peers, resulting in students engaging in various types of dialogic interactions. However, variation in the two teachers' language highlighted different purposes for students to do so. One teacher explained that through these interactions, students could learn from peers, which could result in each individual student revising their original argument. The other teacher articulated that by working with peers and sharing ideas, classroom members would develop a communal understanding. These distinct goals aligned with different patterns in students' argumentation discussion, particularly in relation to students building on each other's ideas, which occurred more frequently in the classroom focused on communal understanding. The findings suggest the need to continue supporting teachers in developing and using rich instructional strategies to help students with dialogic interactions related to argumentation. This work also sheds light on the importance of how teachers frame the goals for student engagement in this science practice.
-
Ricca, Francesco, et al. (Eds.)
The integration of low-level perception with high-level reasoning is one of the oldest problems in Artificial Intelligence. Today, the topic is revisited with the recent rise of deep neural networks. However, it is still not clear how complex and high-level reasoning, such as default reasoning, ontology reasoning, and causal reasoning, can be successfully computed by these approaches. The latter subject has been well studied in the area of knowledge representation (KR), but many KR formalisms, including answer set programming (ASP), are logic-oriented and do not incorporate high-dimensional feature spaces as in deep learning, which limits the applicability of KR in many practical applications.
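For readers unfamiliar with ASP, the toy program below shows the kind of default reasoning referred to above, run through clingo's Python API. The example is an illustrative assumption, not taken from the cited work, and it also makes the stated limitation visible: nothing in the program touches a high-dimensional feature space.

```python
# Toy default-reasoning example in ASP via the clingo Python API
# (assumes clingo is installed, e.g. `pip install clingo`). Illustrative only.
import clingo

PROGRAM = """
bird(tweety). bird(tux). penguin(tux).
ab(X) :- penguin(X).             % penguins are abnormal with respect to flying
flies(X) :- bird(X), not ab(X).  % by default, birds fly unless known to be abnormal
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))
# The answer set contains flies(tweety) but not flies(tux).
```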
-
The Next Generation Science Standards [1] recognized evidence-based argumentation as one of the essential skills for students to develop throughout their science and engineering education. Argumentation focuses students on the need for quality evidence, which helps to develop their deep understanding of content [2]. Argumentation has been studied extensively in mathematics and science education and, to some extent, in engineering education (see, for example, [3], [4], [5], [6]). After a thorough search of the literature, we found few studies that have considered how teachers support collective argumentation during engineering learning activities. The purpose of this program of research was to support teachers in viewing argumentation as an important way to promote critical thinking and to provide teachers with tools to implement argumentation in their lessons integrating coding into science, technology, engineering, and mathematics (which we refer to as integrative STEM). We applied a framework developed for secondary mathematics [7] to understand how teachers support collective argumentation in integrative STEM lessons. This framework used Toulmin's [8] conceptualization of argumentation, which includes three core components of arguments: a claim (or hypothesis) that is based on data (or evidence), accompanied by a warrant (or reasoning) that relates the data to the claim [9], [8]. To adapt the framework, video data were coded using previously established methods for analyzing argumentation [7]. In this paper, we consider how the framework can be applied to an elementary school teacher's classroom interactions and present examples of how the teacher implements various questioning strategies to facilitate more productive argumentation and deeper student engagement. We aim to understand the nature of the teacher's support for argumentation: contributions and actions from the teacher that prompt or respond to parts of arguments. In particular, we look at examples of how the teacher supports students to move beyond unstructured tinkering (e.g., trial and error) to think logically about coding and to develop reasoning for the choices they make in programming. We also look at the components of arguments that students provide, with and without teacher support. Through the use of the framework, we are able to articulate important aspects of collective argumentation that would otherwise remain in the background. The framework gives us both the eyes to see and the language to describe how teachers support collective argumentation in integrative STEM classrooms.
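To make the Toulmin triad concrete, here is a minimal sketch of how coded argument components might be represented when analyzing classroom transcripts. The structure, field names, and the teacher-prompt flag are assumptions for illustration only, not the published coding framework from [7].

```python
# Illustrative representation of Toulmin-style argument components for
# transcript coding; structure and field names are assumed, not from [7].
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentComponent:
    kind: str        # "claim", "data", or "warrant"
    speaker: str     # e.g., "student" or "teacher"
    text: str        # transcript excerpt
    teacher_prompted: bool = False  # was this part elicited by a teacher question?

@dataclass
class CollectiveArgument:
    episode_id: str
    components: List[ArgumentComponent] = field(default_factory=list)

    def is_complete(self) -> bool:
        """A core Toulmin argument has at least one claim, data, and warrant."""
        return {"claim", "data", "warrant"} <= {c.kind for c in self.components}
```

A structure like this makes it easy to tally, for example, how many warrants students supplied with versus without teacher support, which mirrors the kind of analysis the abstract describes.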