
This research paper discusses the opportunities that a computer program can present in analyzing large amounts of qualitative data collected through a survey tool. Researchers face many challenges when working with longitudinal qualitative data. The coding scheme may evolve over time, requiring recoding of early data. There may be long periods of time between rounds of data analysis. Typically, multiple researchers participate in the coding, which may introduce bias or inconsistencies. Ideally the same researchers would analyze all of the data, but often there is some turnover in the team, particularly when students assist with the coding. Computer programs can enable automated or semi-automated coding, helping to reduce errors and inconsistencies in the coded data. In this study, a modeling survey was developed to assess student awareness of model types and administered in four first-year engineering courses across three universities over the span of three years. The data collected from this survey consist of over 4,000 students’ open-ended responses to three questions about types of models in science, technology, engineering, and mathematics (STEM) fields. A coding scheme was developed to identify and categorize model types in student responses. Over two years, two undergraduate researchers analyzed a total of 1,829 students’ survey responses after ensuring intercoder reliability was greater than 80% for each model category. However, with much data remaining to be coded, the research team developed a MATLAB program to automatically implement the coding scheme and identify the types of models students discussed in their responses. MATLAB-coded results were compared to human-coded results (n = 1,829) to assess reliability; results matched between 81% and 99% for the different model categories.
Furthermore, the reliability of the MATLAB-coded results is within the range of the interrater reliability measured between the two undergraduate researchers (86–100% for the five model categories). With good reliability established for the program, all 4,358 survey responses were coded; results showing the number and types of models identified by students are presented in the paper.
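The automated coding described above amounts to matching category-indicative terms in free-text responses. A minimal sketch follows, written in Python for illustration (the study's program was in MATLAB); the category names and keyword lists here are assumptions for demonstration, not the study's actual coding scheme.

```python
import re

# Hypothetical keyword lists per model category; the study's real coding
# scheme is not reproduced here.
CATEGORIES = {
    "physical": ["prototype", "physical", "mockup", "scale model"],
    "mathematical": ["equation", "formula", "mathematical"],
    "computational": ["simulation", "code", "program", "computational"],
    "graphical": ["graph", "diagram", "chart", "drawing"],
    "financial": ["cost", "budget", "financial"],
}

def code_response(text):
    """Return a dict mapping each category to 1 if any keyword appears
    as a whole word (or phrase) in the response, else 0."""
    lowered = text.lower()
    return {
        cat: int(any(re.search(r"\b" + re.escape(kw) + r"\b", lowered)
                     for kw in kws))
        for cat, kws in CATEGORIES.items()
    }

resp = "A prototype is a physical model; a simulation is a computational model."
print(code_response(resp))
```

Automated codes produced this way can then be compared against the human-coded subset, response by response, to compute the percent-match reliability reported in the abstract.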

This is a Complete Research paper. Understanding models is important for engineering students, but modeling is not often taught explicitly in first-year courses. Although there are many types of models in engineering, studies have shown that engineering students most commonly identify prototyping or physical models when asked about modeling. To evaluate students’ understanding of different types of models used in engineering and the effectiveness of interventions designed to teach modeling, a survey was developed. This paper describes the development of a framework to categorize the types of engineering models that first-year engineering students discuss, based on both previous literature and students’ responses to survey questions about models. In Fall 2019, the survey was administered to first-year engineering students to investigate their awareness of types of models and their understanding of how to apply different types of models in solving engineering problems. Students’ responses to three questions from the survey were analyzed in this study: (1) What is a model in science, technology, engineering, and mathematics (STEM) fields? (2) List different types of models that you can think of. (3) Describe each different type of model you listed. Responses were categorized by model type, and the framework was updated through an iterative coding process. After four rounds of analysis of 30 different students’ responses, an acceptable percentage agreement was reached between independent researchers coding the data. Resulting frequencies of the various model types identified by students are presented along with representative student responses to provide insight into students’ understanding of models in STEM. This study is part of a larger project to understand the impact of modeling interventions on students’ awareness of models and their ability to build and apply models.
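The percentage-agreement check used in the iterative coding process above reduces to a simple calculation: the share of responses on which two independent coders assigned the same code. A minimal sketch, using made-up binary codes rather than the study's data:

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of responses on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same set of responses")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Illustrative binary codes (1 = model category present) for ten responses.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(percent_agreement(coder_a, coder_b))  # one disagreement in ten -> 90.0
```

In practice this is computed separately for each model category, and coding rounds continue until every category clears the acceptability threshold.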

To succeed in engineering careers, students must be able to create and apply models to a variety of problems. Types of modeling skills include physical, mathematical, computational, graphing, and financial modeling. However, many students struggle to define and form relevant models in their engineering courses. We hope that students will be better able to define and apply models in their engineering courses after they have completed the MATLAB and/or CATIA courses, and we also hope to see a difference in model identification between the MATLAB and CATIA courses. All students in the MATLAB and CATIA courses must be able to understand and create models in order to solve problems and think critically in engineering. Students need foundational knowledge of the basic modeling skills that will be effective in their courses. The goal is for students to develop an approach that helps them solve problems logically and apply different modeling skills.

Previous work identified an anthropogenic fingerprint pattern in 𝑇AC(𝑥, 𝑡), the amplitude of the seasonal cycle of mid- to upper-tropospheric temperature (TMT), but did not explicitly consider whether fingerprint identification in satellite 𝑇AC(𝑥, 𝑡) data could have been influenced by real-world multidecadal internal variability (MIV). We address this question here using large ensembles (LEs) performed with five climate models. LEs provide many different sequences of internal variability noise superimposed on an underlying forced signal. Despite differences in historical external forcings, climate sensitivity, and MIV properties of the five models, their 𝑇AC(𝑥, 𝑡) fingerprints are similar and statistically identifiable in 239 of the 240 LE realizations of historical climate change. Comparing simulated and observed variability spectra reveals that consistent fingerprint identification is unlikely to be biased by model underestimates of observed MIV. Even in the presence of large (factor of 3–4) intermodel and inter-realization differences in the amplitude of MIV, the anthropogenic fingerprints of seasonal cycle changes are robustly identifiable in models and satellite data. This is primarily because the distinctive, global-scale fingerprint patterns are spatially dissimilar to the smaller-scale patterns of internal 𝑇AC(𝑥, 𝑡) variability associated with the Atlantic Multidecadal Oscillation and the El Niño–Southern Oscillation. The robustness of the seasonal cycle detection and attribution (D&A) results shown here, taken together with the evidence from idealized aquaplanet simulations, suggests that basic physical processes are dictating a common pattern of forced 𝑇AC(𝑥, 𝑡) changes in observations and in the five LEs. The key processes involved include GHG-induced expansion of the tropics, lapse-rate changes, land surface drying, and sea ice decrease.
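The quantity 𝑇AC(𝑥, 𝑡) above is the amplitude of the annual cycle at each grid point. One standard way to estimate it for a given year is the amplitude of the annual harmonic fitted by least squares to the twelve monthly means. A minimal sketch of that calculation, under the assumption of a plain monthly temperature series (not the actual TMT satellite data or the paper's processing chain):

```python
import math

def annual_cycle_amplitude(monthly):
    """Amplitude of the annual harmonic fitted by least squares to one
    year of monthly-mean temperatures (expects 12 values)."""
    n = len(monthly)
    mean = sum(monthly) / n
    # Least-squares cosine and sine coefficients at the annual frequency.
    a = (2.0 / n) * sum((t - mean) * math.cos(2 * math.pi * m / n)
                        for m, t in enumerate(monthly))
    b = (2.0 / n) * sum((t - mean) * math.sin(2 * math.pi * m / n)
                        for m, t in enumerate(monthly))
    return math.hypot(a, b)

# A pure annual cycle of amplitude 10 is recovered exactly.
months = [10 * math.cos(2 * math.pi * m / 12) for m in range(12)]
print(round(annual_cycle_amplitude(months), 6))  # 10.0
```

Repeating this fit for every year and grid point yields the 𝑇AC(𝑥, 𝑡) field whose trend patterns the fingerprint analysis then compares against internal variability.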

Engineers must understand how to build, apply, and adapt various types of models in order to be successful. Throughout undergraduate engineering education, modeling is fundamental to many core concepts, though it is rarely explicitly taught. There are many benefits to explicitly teaching modeling, particularly in the first years of an engineering program. The research questions that drove this study are: (1) How do students’ solutions to a complex, open-ended problem (both written and coded solutions) develop over the course of multiple submissions? and (2) How do these developments compare across groups of students that did and did not participate in a course centered around modeling? Students’ solutions to an open-ended problem across multiple sections of an introductory programming course were explored. These sections were divided into two groups: (1) an experimental group, whose sections discussed and utilized mathematical and computational models explicitly throughout the course, and (2) a comparison group, whose sections focused on developing algorithms and writing code with a more traditional approach. All sections required students to complete a common open-ended problem that consisted of two versions (the first with a smaller data set and the second with a larger data set). Each version had two submissions: (1) a mathematical model or algorithm (i.e., students’ written solution, potentially with tables and figures) and (2) a computational model or program (i.e., students’ MATLAB code). The students’ solutions were graded by student graders after they completed two required training sessions, which consisted of assessing multiple sample student solutions using the rubrics to ensure consistency across grading. The resulting rubric-based assessments of students’ work were analyzed to identify patterns in students’ submissions and comparisons across sections.
The results identified differences in the mathematical and computational model development between students in the experimental and comparison groups. The students in the experimental group were better able to address the complexity of the problem. Most groups demonstrated similar levels and types of change across the submissions for the other dimensions, related to the purpose of model components, addressing the users’ anticipated needs, and communicating their solutions. These findings help inform other researchers and instructors about how to help students develop mathematical and computational modeling skills, especially in a programming course. This work is part of a larger NSF study about the impact of varying levels of modeling interventions related to different types of models on students’ awareness of different types of models and their applications, as well as their ability to apply and develop different types of models.