-
In this full research paper, we discuss the benefits and challenges of using GPT-4 to perform qualitative analysis that identifies faculty's mental models of assessment. Assessments play an important role in engineering education: they are used to evaluate student learning, measure progress, and identify areas for improvement. However, how faculty members approach assessments can vary based on several factors, including their own mental models of assessment. To understand the variation in these mental models, we conducted interviews with faculty members in various engineering disciplines at universities across the United States. Data were collected from 28 participants at 18 different universities. The interviews consisted of questions designed to elicit information about the components of mental models (state, form, function, and purpose) of assessments of students in their classrooms. For this paper, we used natural language processing, with GPT-4 as our language model, to identify the entities and entity relationships in participant statements: we prompted GPT-4 with instructional prompts to extract entities and their relationships from interview excerpts, and then used GraphViz to create graphical representations that characterize and compare individuals' mental models of assessment. We compared GPT-4's output on a small portion of our data to entities and relationships extracted manually by one of our researchers. Both methods identified overlapping entity relationships, but each also discovered entities and relationships the other method did not. GPT-4 tended to identify more basic relationships, while manual analysis identified more nuanced ones. Our results do not currently support using GPT-4 to automatically generate graphical representations of faculty's mental models of assessment; however, a human-in-the-loop process could help offset GPT-4's limitations. In this paper, we also discuss plans for future work to improve on GPT-4's current performance.
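To make the pipeline described above concrete, the following is a minimal Python sketch of the extraction-and-rendering loop: prompting GPT-4 with an instructional prompt to pull entity-relationship triples out of an interview excerpt, then drawing them with GraphViz. The prompt wording, JSON schema, and function names are illustrative assumptions, not the authors' actual materials; the sketch assumes the `openai` (v1+) and `graphviz` Python packages are installed.

```python
import json

from openai import OpenAI      # assumes the openai Python SDK (v1+) is installed
from graphviz import Digraph   # assumes the graphviz package and system binary

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical instructional prompt; the paper's actual prompts are not reproduced here.
INSTRUCTIONS = (
    "Extract the entities and the relationships between them from the interview "
    "excerpt below. Respond only with JSON of the form "
    '{"relations": [{"source": "...", "relation": "...", "target": "..."}]}.'
)

def extract_relations(excerpt: str) -> list[dict]:
    """Ask GPT-4 for (source, relation, target) triples found in one excerpt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{INSTRUCTIONS}\n\nExcerpt:\n{excerpt}"}],
        temperature=0,  # favor reproducible extractions
    )
    # Assumes the model honors the requested JSON format; production code would validate.
    return json.loads(response.choices[0].message.content)["relations"]

def render_mental_model(relations: list[dict], name: str) -> None:
    """Draw the extracted triples as a labeled directed graph with GraphViz."""
    dot = Digraph(name)
    for r in relations:
        dot.edge(r["source"], r["target"], label=r["relation"])
    dot.render(f"{name}.gv", format="png")

# Example with a fabricated excerpt:
# triples = extract_relations("I use weekly quizzes to check whether "
#                             "students understood the lecture.")
# render_mental_model(triples, "participant_01")
```

A human-in-the-loop variant of this sketch would pause between the two calls so a researcher can correct the extracted triples before rendering, the kind of offset to GPT-4's limitations the abstract suggests.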
-
Understanding how engineers connect technical work to broader social-ecological systems is critical because their designs transform societies and environments. As part of a national study exploring how civil and chemical engineers navigate design decisions, we are developing a survey instrument to assess mental models of social-ecological-technical systems (SETS). Mental models (Johnson-Laird, 2001; Rouse & Morris, 1986) are internal representations that individuals use to describe, explain, and predict the form, function, state, and purpose of a system; in this case, the system is the connection between technical design and broader social-ecological systems. The project is informed by three frameworks: 1) planned behavior, 2) mental models, and 3) social-ecological-technical systems (SETS). It integrates the theory of planned behavior with mental models to build fundamental knowledge of engineers' mental models of SETS, changes in those mental models over time, and relationships between mental models and design decisions. This paper presents the instrument development process, centered on eliciting mental models of SETS. SETS (McPhearson et al., 2022) is a generalized framework that positions the social, technical, and ecological elements of a system as vertices of a triangle, with interactions in all directions. The instrument will include both closed-ended and open-ended items, allowing us to leverage advances in natural language processing to scale qualitative data analysis and to combine an inferential framework often associated with quantitative studies with the richer information flow associated with qualitative studies. Previous work using SETS has identified individual components within each vertex that are salient to the specific context (Bixler et al., 2019). In this paper, we report on the phases of instrument development that support this contextualization: 1) initial interview protocol development, followed by semi-structured interviews with six engineering students outside the target majors to test how well the protocol elicits information about students' mental models of SETS; 2) revisions to the interview protocol, followed by semi-structured interviews with senior-level students in chemical and civil engineering (12 per discipline); and 3) deductive and inductive analysis of those interviews, using SETS as our deductive coding scheme followed by inductive coding to refine and contextualize the analysis and support survey development. We conclude with the initial survey instrument, which will undergo pilot testing in the summer of 2024. The results both support instrument development and offer an exploratory analysis of civil and chemical engineering students' mental models of SETS.
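As one concrete illustration of the deductive side of that analysis, here is a minimal Python sketch that represents the SETS triangle as three vertices with pairwise interaction edges, and tallies how often coded interview segments touch each vertex and each edge. The data representation and names are illustrative assumptions; the paper's actual codebook is not reproduced here.

```python
from collections import Counter
from itertools import combinations

# The three SETS vertices (McPhearson et al., 2022); pairwise edges capture the
# "interactions in all directions" between them. Labels are our own shorthand.
VERTICES = ("social", "ecological", "technical")
EDGES = {frozenset(pair) for pair in combinations(VERTICES, 2)}

def tally_codes(coded_segments: list[set[str]]) -> Counter:
    """Count how often each vertex and each pairwise interaction appears.

    Each interview segment is represented as the set of SETS vertices a coder
    assigned to it; a segment touching two vertices also counts toward that edge.
    """
    counts: Counter = Counter()
    for codes in coded_segments:
        for vertex in codes & set(VERTICES):
            counts[vertex] += 1
        for edge in EDGES:
            if edge <= codes:
                counts["-".join(sorted(edge))] += 1
    return counts

# Example: one segment coded social+technical, one coded ecological only.
print(tally_codes([{"social", "technical"}, {"ecological"}]))
```

Tallies like these are one simple way to compare how strongly each discipline's interviews emphasize a given vertex or interaction before the inductive coding refines the picture.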
-
The emergence of generative artificial intelligence (GAI) has prompted a fundamental reexamination of established teaching methods. GAI systems offer both educators and students a chance to reevaluate their academic endeavors. Such reevaluation is particularly pertinent to assessment in engineering instruction, where advanced generative text models are already proficient at the kinds of intricate problems found in engineering courses. While this juncture presents a moment to revisit assessment methods in general, how faculty actually respond to the incorporation of GAI in their evaluative techniques remains unclear. To investigate this, we have initiated a study of the mental constructs that engineering faculty hold about evaluation, focusing on their evolving attitudes toward and responses to GAI, as reported in the Fall of 2023. Adopting a long-term data-gathering strategy, we collected a series of surveys, interviews, and recordings targeting the evaluative decision-making processes of a varied group of engineering educators across the United States. This paper presents the data collection process, our participants' demographics, our data analysis plan, and initial findings based on the participants' backgrounds, followed by our future work and potential implications. In the next step of our study, the collected data will be analyzed using qualitative thematic analysis. Once the study is complete, we believe our findings will sketch the early stages of this emerging paradigm shift in the assessment of undergraduate engineering education, offering a novel perspective on the discourse surrounding evaluation strategies in the field. These insights are vital for stakeholders such as policymakers, educational leaders, and instructors, with significant ramifications for policy development, curriculum planning, and the broader dialogue on integrating GAI into educational evaluation.