Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
- 
            We explore the possibility of using natural language processing (NLP) and generative artificial intelligence (GAI) to streamline thematic analysis (TA) for qualitative research. We follow the traditional TA phases to show where (a) steps one might take with NLP and GAI align with, and diverge from, (b) traditional thematic analysis. Using a case study, we illustrate the application of this workflow to a real-world dataset. We start with the processes involved in data analysis and translate them into analogous steps in a workflow that uses NLP and GAI. We then discuss the potential benefits and limitations of these NLP and GAI techniques, highlighting points of convergence and divergence with thematic analysis, and emphasize the central role of researchers throughout NLP- and GAI-assisted thematic analysis. Finally, we conclude with the implications of this approach for qualitative research and suggestions for future work. Researchers interested in AI-assisted methods can use the roadmap provided in this study to understand the current landscape of NLP and GAI models for qualitative research.
            Free, publicly accessible full text available April 1, 2026.
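The abstract above describes translating traditional thematic-analysis steps into a GAI-assisted workflow with the researcher kept central. A minimal sketch of one such step is shown below; the function names, codes, and candidate themes are all illustrative assumptions, not the authors' actual method, and the model call itself is stubbed out.

```python
# Hypothetical sketch of one step in a GAI-assisted thematic-analysis
# workflow: building an instructional prompt that asks a generative model to
# group open codes into candidate themes, then a human-in-the-loop review.

def build_theme_prompt(codes):
    """Assemble an instructional prompt asking a GAI model to propose themes."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(codes))
    return (
        "You are assisting with qualitative thematic analysis.\n"
        "Group the following open codes into 2-4 candidate themes "
        "and name each theme:\n" + numbered
    )

def review_themes(candidate_themes, researcher_approved):
    """Human-in-the-loop step: keep only themes the researcher confirms."""
    return [t for t in candidate_themes if t in researcher_approved]

# Invented example codes, standing in for codes drawn from interview data.
codes = ["time pressure during grading", "rubric ambiguity",
         "trust in automated scoring"]
prompt = build_theme_prompt(codes)

# A real workflow would send `prompt` to a model via an API; here we assume
# the model returned these candidates, which the researcher then vets.
candidates = ["Workload and time constraints", "Assessment clarity",
              "Attitudes toward automation"]
kept = review_themes(candidates,
                     {"Workload and time constraints", "Assessment clarity"})
```

The design point mirrors the abstract: the model only proposes, and nothing enters the analysis without explicit researcher approval.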
- 
            In this full research paper, we discuss the benefits and challenges of using GPT-4 to perform qualitative analysis to identify faculty’s mental models of assessment. Assessments play an important role in engineering education. They are used to evaluate student learning, measure progress, and identify areas for improvement. However, how faculty members approach assessments can vary based on several factors, including their own mental models of assessment. To understand the variation in these mental models, we conducted interviews with faculty members in various engineering disciplines at universities across the United States. Data were collected from 28 participants at 18 different universities. The interviews consisted of questions designed to elicit information related to the pieces of mental models (state, form, function, and purpose) of assessments of students in their classrooms. For this paper, we analyzed interviews to identify the entities and entity relationships in participant statements using natural language processing with GPT-4 as our language model. We asked the model to extract entities and their relationships from interview excerpts using instructional prompts, and then created graphical representations with GraphViz to characterize and compare individuals’ mental models of assessment. We compared GPT-4’s results on a small portion of our data to entities and relationships extracted manually by one of our researchers. Both methods identified overlapping entity relationships, but each also discovered entities and relationships the other approach missed. GPT-4 tended to identify more basic relationships, while manual analysis identified more nuanced ones. Our results do not currently support using GPT-4 to automatically generate graphical representations of faculty’s mental models of assessment. However, a human-in-the-loop process could help offset GPT-4’s limitations. In this paper, we also discuss plans for future work to improve upon GPT-4’s current performance.
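The abstract above describes converting model-extracted entities and relationships into GraphViz diagrams. A minimal sketch of that conversion step follows; the triples and graph name are invented examples (not study data), and this is one plausible rendering approach rather than the authors' implementation.

```python
# Hypothetical sketch: turn (source, relation, target) triples, as might be
# extracted from interview excerpts by a language model, into a GraphViz DOT
# digraph string that the `dot` tool can render.

def triples_to_dot(triples, name="mental_model"):
    """Render entity-relationship triples as a DOT digraph string."""
    lines = [f"digraph {name} {{"]
    for src, rel, dst in triples:
        # Each triple becomes a labeled directed edge.
        lines.append(f'  "{src}" -> "{dst}" [label="{rel}"];')
    lines.append("}")
    return "\n".join(lines)

# Invented example triples standing in for model output.
triples = [
    ("exam", "measures", "student learning"),
    ("rubric", "defines", "grading criteria"),
]
dot = triples_to_dot(triples)

# The resulting string can be written to a .dot file and rendered, e.g.:
#   dot -Tpng mental_model.dot -o mental_model.png
```

Comparing the manually coded triples to the model-extracted ones, as the study does, then reduces to comparing two such edge sets.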
- 
            The emergence of generative artificial intelligence (GAI) has prompted a fundamental reexamination of established teaching methods. GAI systems offer both educators and students a chance to reevaluate their academic endeavors. Such reevaluation is particularly pertinent to assessment in engineering instruction, where advanced generative text models are proficient at addressing intricate challenges like those found in engineering courses. While this juncture presents a moment to revisit assessment methods in general, how faculty are actually responding to the incorporation of GAI into their evaluative techniques remains unclear. To investigate this, we initiated a study of the mental constructs engineering faculty hold about evaluation, focusing on their evolving attitudes and responses to GAI as reported in the Fall of 2023. Adopting a long-term data-gathering strategy, we conducted a series of surveys, interviews, and recordings targeting the evaluative decision-making processes of a varied group of engineering educators across the United States. This paper presents the data-collection process, our participants’ demographics, our data-analysis plan, and initial findings based on participants’ backgrounds, followed by future work and potential implications. The next step of our study will analyze the collected data using qualitative thematic analysis. Once the study is complete, we believe our findings will sketch the early stages of this emerging paradigm shift in the assessment of undergraduate engineering education, offering a novel perspective on the discourse surrounding evaluation strategies in the field. These insights are vital for stakeholders such as policymakers, educational leaders, and instructors, with significant ramifications for policy development, curriculum planning, and the broader dialogue on integrating GAI into educational evaluation.
- 
            This Work-in-Progress paper studies the mental models of engineering faculty regarding assessment, focusing on their use of metaphors. Assessments are crucial components of courses, serving various purposes in the learning and teaching process, such as gauging student learning, evaluating instructors and course design, and documenting learning for accountability. Thus, when it comes to faculty development on teaching, assessments should consistently be considered while discussing pedagogical improvements. To contribute to faculty development research, our study illuminates several metaphors engineering faculty use to discuss assessment concepts and knowledge. This paper helps to answer the research question: which metaphors do faculty use when talking about assessment in their classrooms? Through interviews grounded in mental model theory, six metaphors emerged: (1) cooking, (2) playing golf, (3) driving a car, (4) coaching football, (5) blood tests, and (6) generically playing a sport or an instrument. Two important takeaways stemmed from the analysis. First, these metaphors drew on experiences commonly portrayed in the culture in which the study took place, which is important for those working in faculty development to note, as such metaphors may create communication challenges. Second, the mental model approach showed potential for eliciting the ways engineering faculty describe and discuss assessments, offering opportunities for future research and practice in faculty development. The lightning talk will present further details on the findings.
 An official website of the United States government