WIP: Faculty Use of Metaphors When Discussing Assessment
This Work-in-Progress paper studies the mental models of engineering faculty regarding assessment, focusing on their use of metaphors. Assessments are crucial components of courses: they serve various purposes in the learning and teaching process, such as gauging student learning, evaluating instructors and course design, and documenting learning for accountability. Faculty development on teaching should therefore consistently consider assessment when discussing pedagogical improvements. To contribute to faculty development research, our study illuminates several metaphors engineering faculty use to discuss assessment concepts and knowledge. This paper helps to answer the research question: Which metaphors do faculty use when talking about assessment in their classrooms? Through interviews grounded in mental model theory, six metaphors emerged: (1) cooking, (2) playing golf, (3) driving a car, (4) coaching football, (5) blood tests, and (6) generically playing a sport or an instrument. Two important takeaways stemmed from the analysis. First, these metaphors draw on experiences commonly portrayed in the culture in which the study took place; this is important for those working in faculty development, as culturally bound metaphors may create communication challenges. Second, the mental model approach showed potential for eliciting the ways engineering faculty describe and discuss assessments, offering opportunities for future research and practice in faculty development. The lightning talk will present further details on the findings.
- Award ID(s): 2113631
- PAR ID: 10432189
- Date Published:
- Journal Name: Proceedings of the American Society for Engineering Education
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This full research paper documents assessment definitions from engineering faculty members, mainly from Research 1 universities. Assessments are essential components of the engineering learning environment, and how engineering faculty make decisions about assessments in their classrooms is a relatively understudied topic in engineering education research. Exploring how engineering faculty think about and implement assessments through the mental model framework can help address this research gap. The research documented in this paper analyzes data from an informational questionnaire, collected as part of a larger study, to understand how participants define assessments using methods inspired by mixed-methods strategies: descriptive statistics on demographic findings, and Natural Language Processing (NLP) combined with coding of the open-ended question asking participants to define assessments, which yielded cluster themes that characterize the definitions. Findings show that while many participants defined assessments in relation to measuring student learning, other substantial aspects include benchmarking, assessing student ability and competence, and formal evaluation for quality. These findings serve as foundational knowledge toward deeper exploration and understanding of the assessment mental models of engineering faculty, and they begin to address the aforementioned research gap on faculty assessment decisions in classrooms. (A minimal clustering sketch in the spirit of this analysis appears after this list.)
- In this full research paper, we discuss the benefits and challenges of using GPT-4 to perform qualitative analysis to identify faculty’s mental models of assessment. Assessments play an important role in engineering education: they are used to evaluate student learning, measure progress, and identify areas for improvement. However, how faculty members approach assessments can vary based on several factors, including their own mental models of assessment. To understand the variation in these mental models, we conducted interviews with faculty members in various engineering disciplines at universities across the United States. Data were collected from 28 participants at 18 different universities. The interviews consisted of questions designed to elicit information related to the pieces of mental models (state, form, function, and purpose) of assessments of students in their classrooms. For this paper, we analyzed the interviews to identify entities and entity relationships in participant statements, using natural language processing with GPT-4 as our language model, and created graphical representations with GraphViz to characterize and compare individuals’ mental models of assessment. Using instructional prompts, we asked GPT-4 to extract entities and their relationships from interview excerpts, then compared its results on a small portion of our data to entities and relationships extracted manually by one of our researchers. Both methods identified overlapping entity relationships, but each also discovered entities and relationships the other missed: the GPT-4 model tended to identify more basic relationships, while manual analysis identified more nuanced relationships. Our results do not currently support using GPT-4 to automatically generate graphical representations of faculty’s mental models of assessment; however, a human-in-the-loop process could help offset GPT-4’s limitations. In this paper, we discuss plans for future work to improve on GPT-4’s current performance. (See the extraction-and-rendering sketch after this list.)
- The purpose of this NSF grantees poster is to disseminate initial findings on faculty perceptions of mastery-based assessment in a project-based engineering program, as part of an NSF Broadening Participation award. Pedagogical approaches influence not only what students learn but also their mindsets, motivation, and how they see themselves as engineers. Mastery-based teaching has seen growing popularity in engineering education as faculty strive to support students in achieving learning outcomes linked with continuous improvement to promote performance and persistence. However, this teaching approach poses specific challenges, as it requires significant restructuring of assessment practices, including assignments, exams, evaluation processes, and grading. This work seeks to better understand faculty perspectives on assessment within mastery-based teaching, supporting a user-oriented perspective that can help other engineering faculty navigate the challenges of using evidence-based teaching practices in their own classrooms. This paper focuses on qualitative findings from an initial pilot study within a larger, NSF-funded Broadening Participation project at a small, Eastern private college. The exploratory pilot study includes the perceptions of two engineering faculty members using mastery teaching and assessment in a project-based engineering program. A semi-structured interview with multiple open-ended questions prompted participants to share their experiences with assessment in relation to their self-efficacy around teaching and their perceptions of assessment in relation to their students’ learning, confidence, and agency. Directed content and thematic analysis were used to identify codes and develop themes in how participants described features of assessment in their engineering program. Preliminary results illustrate features of mastery assessment that faculty highlighted as particularly challenging or successful, along with related lessons learned. The initial themes and patterns identified in this pilot study will inform a more focused full data collection phase in the larger study. Additionally, this poster serves as an opportunity to initiate dialogue around the implementation of mastery-based assessment and project-based learning in engineering programs and to better support engineering faculty in incorporating elements of mastery-based teaching and assessment.
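Two of the analyses described in the related work above lend themselves to short illustrations. First, the definitions paper clusters open-ended survey responses into themes with NLP. Below is a minimal sketch of one conventional way to do this, using TF-IDF features and k-means in scikit-learn; the sample responses, cluster count, and library choices are assumptions for illustration, not the authors' documented pipeline.

```python
# Minimal sketch: cluster open-ended "define assessment" responses into themes.
# The responses and cluster count below are invented for illustration only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

definitions = [
    "A way to measure how much students have learned",
    "A benchmark for comparing student performance across sections",
    "A formal evaluation of the quality of student work",
    "A check on student ability and competence",
]

# Turn each free-text definition into a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(definitions)

# Group the vectors into candidate theme clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Surface the top-weighted terms per cluster as rough theme labels.
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    top = terms[kmeans.cluster_centers_[cluster].argsort()[::-1][:3]]
    print(f"Cluster {cluster}: {', '.join(top)}")
```

Second, the GPT-4 paper renders extracted entity relationships as graphs with GraphViz. The sketch below covers only the rendering step: the (entity, relation, entity) triples are hypothetical stand-ins for language-model output, and the `graphviz` Python package (which requires the Graphviz binaries to be installed) is one plausible tool; the authors' prompts and exact tooling are not reproduced here.

```python
# Minimal sketch: draw hypothetical extracted triples as a directed graph.
from graphviz import Digraph

# Hypothetical output of an entity-relationship extraction step.
triples = [
    ("exam", "measures", "student learning"),
    ("exam", "provides", "feedback"),
    ("feedback", "informs", "course design"),
]

graph = Digraph("mental_model", format="png")
for subject, relation, obj in triples:
    graph.node(subject)
    graph.node(obj)
    graph.edge(subject, obj, label=relation)

# Writes mental_model.png; needs the Graphviz binaries on PATH.
graph.render("mental_model", cleanup=True)
```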