
Title: Developing a Program to Assist in Qualitative Data Analysis: How Engineering Students Discuss Model Types
This research paper discusses the opportunities that a computer program can present in analyzing large amounts of qualitative data collected through a survey tool. Researchers working with longitudinal qualitative data face many challenges: the coding scheme may evolve over time, requiring re-coding of early data; there may be long periods of time between rounds of data analysis; and, while multiple researchers typically participate in the coding, this may introduce bias or inconsistencies. Ideally, the same researchers would analyze all of the data, but there is often turnover in the team, particularly when students assist with the coding. Computer programs can enable automated or semi-automated coding, helping to reduce errors and inconsistencies in the coded data. In this study, a modeling survey was developed to assess student awareness of model types and administered in four first-year engineering courses across three universities over the span of three years. The data collected from this survey consists of over 4,000 students’ open-ended responses to three questions about types of models in science, technology, engineering, and mathematics (STEM) fields. A coding scheme was developed to identify and categorize model types in student responses. Over two years, two undergraduate researchers analyzed a total of 1,829 students’ survey responses after ensuring intercoder reliability was greater than 80% for each model category. However, with much data remaining to be coded, the research team developed a MATLAB program to automatically implement the coding scheme and identify the types of models students discussed in their responses. MATLAB-coded results were compared to human-coded results (n = 1,829) to assess reliability; results matched between 81% and 99% for the different model categories. Furthermore, the reliability of the MATLAB-coded results is within the range of the interrater reliability measured between the two undergraduate researchers (86-100% for the five model categories). With good reliability established for the program, all 4,358 survey responses were coded; results showing the number and types of models identified by students are presented in the paper.
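The abstract does not reproduce the program's source, but a keyword-based automatic coder of the kind described might look like the minimal MATLAB sketch below. All category names, keywords, and sample responses here are invented for illustration and are not the authors' actual coding scheme.

    % Hypothetical sketch of keyword-based coding of open-ended responses
    % into model-type categories. Keywords and categories are illustrative.
    categories = struct( ...
        'physical',      ["prototype", "mock-up", "physical model"], ...
        'mathematical',  ["equation", "formula", "mathematical model"], ...
        'computational', ["simulation", "computer program", "matlab"]);

    responses = [ ...
        "A prototype is a physical model used to test a design."; ...
        "Engineers use equations and simulations to predict behavior."];

    names = fieldnames(categories);
    codes = false(numel(responses), numel(names));  % one row per response
    for i = 1:numel(responses)
        text = lower(responses(i));
        for j = 1:numel(names)
            % Flag the category if any of its keywords appears in the text
            codes(i, j) = contains(text, categories.(names{j}));
        end
    end
    array2table(codes, 'VariableNames', names)

Agreement with the human-coded subset could then be checked per category with something like mean(codes == humanCodes, 1), where humanCodes is a hypothetical matrix of the undergraduates' codes; this mirrors the 81%-99% per-category match rates reported above.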
Award ID(s):
1827600
Publication Date:
2022
NSF-PAR ID:
10392774
Journal Name:
2022 ASEE Annual Conference
Sponsoring Org:
National Science Foundation
More Like this
  1. This work-in-progress paper presents an example of conducting a systematic literature review (SLR) to understand students’ affective response to active learning practices, and it focuses on the development and testing of a coding form for analyzing the literature. Specifically, the full paper seeks to answer: (1) what affective responses do instructors measure, (2) what evidence is used to study those responses, and (3) how are course features connected with student response? We conducted database searches with carefully defined search queries, which resulted in 2,365 abstracts from 1990 to 2015. Each abstract was screened by two researchers based on meeting inclusion criteria, with an adjudication round in the case of disagreement (a minimal sketch of one way to quantify such screener agreement appears after this list). We used RefWorks, an online citation management program, to track abstracts during this process. We identified over 480 abstracts that satisfied our criteria. Following abstract screening, we developed and tested a manuscript coding guide to capture the salient characteristics of each paper. We created an initial coding form by determining what paper topics would address our research questions and reviewing the literature to determine the most frequent response categories. We then piloted and tested the reliability of the form over three rounds of independent pair-coding, with each round resulting in clarifications to the form and mutual agreement on terms’ meanings. This process of developing a manuscript coding guide demonstrates how to use free online tools, such as Google Forms and Google Sheets, to inexpensively manage a large SLR team with significant turnover. Currently, we are in the process of applying the coding guide to the full texts. When complete, the resulting data will be synthesized by creating and testing relationships between variables, using each primary source as a case study to support or refute the hypothesized relationship.
  2. Engineers must understand how to build, apply, and adapt various types of models in order to be successful. Throughout undergraduate engineering education, modeling is fundamental to many core concepts, though it is rarely explicitly taught. There are many benefits to explicitly teaching modeling, particularly in the first years of an engineering program. The research questions that drove this study are: (1) How do students’ solutions to a complex, open-ended problem (both written and coded solutions) develop over the course of multiple submissions? and (2) How do these developments compare across groups of students who did and did not participate in a course centered around modeling? Students’ solutions to an open-ended problem across multiple sections of an introductory programming course were explored. These sections were divided across two groups: (1) an experimental group, whose sections discussed and utilized mathematical and computational models explicitly throughout the course, and (2) a comparison group, whose sections focused on developing algorithms and writing code with a more traditional approach. All sections required students to complete a common open-ended problem that consisted of two versions (the first with a smaller data set and the second with a larger data set). Each version had two submissions: (1) a mathematical model or algorithm (i.e., students’ written solution, potentially with tables and figures) and (2) a computational model or program (i.e., students’ MATLAB code). The students’ solutions were graded by student graders who had completed two required training sessions, which consisted of assessing multiple sample student solutions using the rubrics to ensure consistency across grading. The resulting rubric-based assessments of students’ work were analyzed to identify patterns in students’ submissions and to compare across sections. The results identified differences in the mathematical and computational model development between students from the experimental and comparison groups. The students in the experimental group were better able to address the complexity of the problem. Most groups demonstrated similar levels and types of change across the submissions for the other dimensions, which related to the purpose of model components, addressing the users’ anticipated needs, and communicating their solutions. These findings help inform other researchers and instructors how to help students develop mathematical and computational modeling skills, especially in a programming course. This work is part of a larger NSF study about the impact of varying levels of modeling interventions related to different types of models on students’ awareness of different types of models and their applications, as well as their ability to apply and develop different types of models.
  3. This work-in-progress paper investigates how students participating in a chemical engineering (ChE) Research Experience for Undergraduates (REU) program conceptualize and make plans for research projects. The National Science Foundation has invested substantial financial resources in REU programs, which allow undergraduate students the opportunity to work with faculty in their labs and to conduct hands-on experiments. Prior research has shown that REU programs have an impact on students’ perceptions of their research skills, often measured through the Undergraduate Research Student Self-Assessment (URSSA) survey. However, few evaluation and research studies have gone beyond perception data to include direct measures of students’ gains from program participation. This work-in-progress describes efforts to evaluate the impact of an REU on students’ conceptualization and planning of research studies using a pre-post semi-structured interview process. The construct being investigated for this study is planning, which has been espoused as a critical step in the self-regulated learning (SRL) process (Winne & Perry, 2000; Zimmerman, 2008). Students who effectively self-regulate demonstrate higher levels of achievement and comprehension (Dignath & Büttner, 2008), and (arguably) work efficiency. Planning is also a critical step in large projects, such as research (Dvir & Lechler, 2004). Those who effectively plan their projects make consistent progress and are more likely to achieve project success (Dvir, Raz, & Shenhar, 2003). Prior REU research has been important in demonstrating some positive impacts of REU programs, but it is time to dig deeper into the potential benefits of REU participation. Many REU students are included in weekly lab meetings, and thus potentially take part in the planning process for research projects. Thus, the research question explored here is: How do REU participants conceptualize and make plans for research projects? The study was conducted in the ChE REU program at a large, mid-Atlantic research-oriented university during the summer of 2018. Sixteen students in the program participated in the study, which entailed completing a planning task followed by a semi-structured interview at the start and the end of the REU program. During each session, participants read a case statement that asked them to outline a plan in writing for a research project from beginning to end. Using semi-structured interview procedures, their written outlines were then verbally described. The verbalizations were recorded and transcribed. Two members of the research team are currently analyzing the responses using an open coding process to gain familiarity with the transcripts. The data will then be recoded based on the initial open coding and in line with a self-regulatory and project-management framework. Coding is underway, and preliminary results will be ready by the draft submission deadline. The methods employed in this study might prove fruitful in understanding the direct impact on students’ knowledge, rather than relying on their perceptions of gains. Future research could investigate differences in students’ research plans based on prior research experience, the research intensity of students’ home institutions, and how their plans may be impacted by training.
  4. The purpose of this study is to develop an instrument to measure student perceptions about the learning experiences in their online undergraduate engineering courses. Online education continues to grow broadly in higher education, but the movement toward acceptance and comprehensive utilization of online learning has generally been slower in engineering. Recently, however, there have been indicators that this could be changing. For example, ABET has accredited online undergraduate engineering degrees at Stony Brook University and Arizona State University (ASU), and an increasing number of other undergraduate engineering programs also offer online courses. During this period of transition in engineering education, further investigation about the online modality in the context of engineering education is needed, and survey instrumentation can support such investigations. The instrument presented in this paper is grounded in a Model for Online Course-level Persistence in Engineering (MOCPE), which was developed by our research team by combining two motivational frameworks used to study student persistence: the Expectancy x Value Theory of Achievement Motivation (EVT) and the ARCS model of motivational design. The initial MOCPE instrument contained 79 items related to students’ perceptions about the characteristics of their courses (i.e., the online learning management system, instructor practices, and peer support), expectancies of course success, course task values, perceived course difficulties, and intention to persist in the course. Evidence of validity and reliability was collected using a three-step process. First, we tested the face and content validity of the instrument with experts in online engineering education and with online undergraduate engineering students. Next, the survey was administered to the online undergraduate engineering student population at a large, Southwestern public university, and an exploratory factor analysis (EFA) was conducted on the responses. Lastly, evidence of reliability was obtained by computing the internal consistency of each resulting scale (a minimal sketch of this computation appears below). The final instrument has seven scales with 67 items across 10 factors. The Cronbach alpha values for these scales range from 0.85 to 0.97. The full paper will provide complete details about the development and psychometric evaluation of the instrument, including evidence of validity and reliability. The instrument described in this paper will ultimately be used as part of a larger, National Science Foundation-funded project investigating the factors influencing online undergraduate engineering student persistence. It is currently being used in the context of this project to conduct a longitudinal study intended to understand the relationships between the experiences of online undergraduate engineering students in their courses and their intentions to persist in the course. We anticipate that the instrument will be of interest and use to other engineering education researchers who are also interested in studying the population of online students.
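None of these related records publish their reliability code. As a hedged illustration of the two-screener agreement described in the first item above, the MATLAB sketch below computes percent agreement and Cohen's kappa from invented include/exclude decisions; it is one standard way to quantify such agreement, not the authors' method.

    % Hypothetical example: two screeners' include (1) / exclude (0)
    % decisions on the same ten abstracts.
    coderA = [1 1 0 1 0 0 1 0 1 1];
    coderB = [1 0 0 1 0 1 1 0 1 1];

    po = mean(coderA == coderB);                    % observed agreement
    pInclude = mean(coderA) * mean(coderB);         % chance both "include"
    pExclude = mean(1 - coderA) * mean(1 - coderB); % chance both "exclude"
    pe = pInclude + pExclude;                       % agreement expected by chance
    kappa = (po - pe) / (1 - pe)                    % chance-corrected agreement

Percent agreement (po) corresponds to the 80%-style thresholds quoted in these abstracts; kappa additionally corrects for agreement expected by chance.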
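Similarly, the internal-consistency step in the last item above reports Cronbach's alpha for each scale. The sketch below computes alpha for a single hypothetical four-item scale; the response matrix is invented for illustration and is not the authors' data.

    % Hypothetical example: responses to one four-item scale
    % (rows = respondents, columns = items, 5-point Likert values).
    X = [5 4 5 4; 3 3 4 3; 4 4 4 5; 2 3 2 2; 5 5 4 4];

    k = size(X, 2);                 % number of items in the scale
    itemVar  = var(X, 0, 1);        % sample variance of each item
    totalVar = var(sum(X, 2));      % variance of the summed scale score
    alpha = (k / (k - 1)) * (1 - sum(itemVar) / totalVar)

Per-scale alpha values in the reported 0.85-0.97 range would indicate high internal consistency.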