This work-in-progress research paper considers the question: what kinds of problems do engineering students commonly solve during their education? Engineering problems are generally classified as ill-structured/open-ended or well-structured/closed-ended. Various authors have identified the characteristics of ill-structured problems or presented typologies of problems. Simple definitions state that well-structured problems are simple, concrete, and have a single solution, while ill-structured problems are complex, abstract, and have multiple possible solutions (Jonassen, 1997, 2000). More detailed classifications have been provided by Shin, Jonassen, and McGee (2003), Voss (2006), and Johnstone (2001). It is commonly understood that classroom problems are well-structured while workplace problems are ill-structured, but we could not find any empirical data to confirm or refute this proposition. Engineers commonly encounter ill-structured problems, such as design problems, in the field; therefore, problem-solving skills are invaluable and should be taught in engineering courses. This research specifically examines the types of problems present in the two most commonly used statics textbooks (Hibbeler, 2016; Beer et al., 2019). All end-of-chapter problems in these textbooks were classified using Jonassen’s (2000) well-known typology of problem types. Of the 3,387 problems across both books, 99% fell into the algorithmic category and the remainder fell into the logic category. These preliminary results provide an understanding of the types of problems engineering students most commonly encounter in their classes. Prior research has documented that textbook example problems exert a strong influence on students’ problem-solving strategies (Lee et al., 2013). If instructors assign only textbook problems, students in statics courses do not see any ill-structured problems at that stage in their education. We argue that even in foundational courses such as statics, students should be exposed to ill-structured problems. By providing opportunities to solve more ill-structured problems, instructors can help students become more familiar with them and better prepared for the workforce. Moving forward, textbooks from several other courses will be analyzed to compare a fundamental engineering course such as statics with upper-level courses. This will allow us to determine how problem types differ between entry-level and advanced classes and reveal whether engineering textbooks primarily contain well-structured problems.
Keywords: problem solving, textbooks, ill-structured problems
Enriching Students’ Combinatorial Reasoning through the Use of Loops and Conditional Statements in Python
When solving counting problems, students often struggle with determining what they are trying to count (and thus what problem type they are trying to solve and, ultimately, what formula appropriately applies). There is a need to explore potential interventions that deepen students’ understanding of key distinctions between problem types and help them differentiate meaningfully between such problems. In this paper, we investigate undergraduate students’ understanding of sets of outcomes in the context of elementary Python computer programming. We show that four straightforward conditional statements used in short programs seemed to reinforce important conceptual understandings of four canonical combinatorial problem types. We also suggest that the findings in this paper represent one example of a way in which a computational setting may facilitate mathematical learning.
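To make the idea concrete, here is a minimal Python sketch in the spirit of the abstract: the same nested loops enumerate candidate outcomes, and only the conditional statement changes across problem types. The alphabet, the sequence length, and the particular four problem types shown (sequences with repetition, permutations, combinations, and multisets) are illustrative assumptions, not necessarily the exact examples used in the study.

```python
# Illustrative sketch (not taken from the paper): explicit nested loops generate
# all length-3 outcomes over a 4-letter alphabet, and a conditional statement
# decides which outcomes are listed and counted. The four conditions below match
# four canonical combinatorial problem types; this set is an assumption for
# illustration only.

letters = "abcd"  # n = 4 options, k = 3 positions

conditions = {
    "sequences (repetition allowed)": lambda x, y, z: True,                 # 4**3 = 64
    "permutations (no repeats)": lambda x, y, z: len({x, y, z}) == 3,       # 4*3*2 = 24
    "combinations (x < y < z)": lambda x, y, z: x < y < z,                  # C(4,3) = 4
    "multisets (x <= y <= z)": lambda x, y, z: x <= y <= z,                 # C(6,3) = 20
}

for name, keep in conditions.items():
    outcomes = []
    for x in letters:          # the nested-loop structure is identical for every
        for y in letters:      # problem type; only the conditional statement
            for z in letters:  # inside the innermost loop changes
                if keep(x, y, z):
                    outcomes.append(x + y + z)
    print(f"{name}: {len(outcomes)} outcomes, e.g. {outcomes[:3]}")
```

Running the sketch lists 64, 24, 4, and 20 outcomes respectively, which line up with the familiar formulas n^k, n!/(n−k)!, C(n, k), and C(n+k−1, k); the point of interest is that the distinction between problem types is carried entirely by the conditional statement.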
- Award ID(s): 1650943
- PAR ID: 10148794
- Date Published:
- Journal Name: International Journal of Research in Undergraduate Mathematics Education
- ISSN: 2198-9745
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This is a Complete Research paper. Understanding models is important for engineering students but is not often taught explicitly in first-year courses. Although there are many types of models in engineering, studies have shown that engineering students most commonly identify prototyping or physical models when asked about modeling. In order to evaluate students’ understanding of the different types of models used in engineering and the effectiveness of interventions designed to teach modeling, a survey was developed. This paper describes the development of a framework for categorizing the types of engineering models that first-year engineering students discuss, based on both previous literature and students’ responses to survey questions about models. In Fall 2019, the survey was administered to first-year engineering students to investigate their awareness of types of models and their understanding of how to apply different types of models in solving engineering problems. Students’ responses to three questions from the survey were analyzed in this study: (1) What is a model in science, technology, engineering, and mathematics (STEM) fields? (2) List different types of models that you can think of. (3) Describe each different type of model you listed. Responses were categorized by model type, and the framework was updated through an iterative coding process. After four rounds of analysis of 30 different students’ responses, an acceptable percentage agreement was reached between independent researchers coding the data. The resulting frequencies of the various model types identified by students are presented along with representative student responses to provide insight into students’ understanding of models in STEM. This study is part of a larger project to understand the impact of modeling interventions on students’ awareness of models and their ability to build and apply models.
- Problem solving is an essential part of engineering. Research shows that students are not exposed to ill-structured problems in engineering classrooms as much as well-structured problems, and they do not feel as confident or comfortable solving them. There have been several studies on how engineering students solve and perceive ill-structured problems; however, understanding engineering faculty’s perceptions of teaching and solving such problems is important as well. Since it is the engineering faculty who teach students how to approach engineering problems, it is essential to understand how they perceive solving and teaching these problems. The following research question guided this work: What beliefs do engineering faculty have about teaching and solving ill-structured problems? Ten tenure-track or tenured civil engineering faculty from various universities across the U.S. were interviewed after solving an ill-structured engineering problem. Their responses were transcribed and coded. The findings suggest that faculty generally preferred to teach both well-structured and ill-structured problems in their courses. They also acknowledged the advantages of ill-structured problems: they promote critical thinking, require creativity, and are more challenging. However, the results showed that some faculty are less likely to use ill-structured problems in their teaching than well-structured problems. We also found that faculty became more comfortable teaching ill-structured problems as they gained more experience with teaching these types of problems. Faculty’s responses showed that while they solve ill-structured problems regularly as part of their research, some do not integrate these problems into the classes they teach. These results indicate that although faculty recognize the importance of using ill-structured problems in teaching, the lack of experience with teaching them, other faculty responsibilities, and the complex nature of these problems make it challenging for engineering faculty to incorporate them into the engineering classroom. Based on these findings, recommendations are provided in the paper to improve faculty’s comfort with, and willingness to use, ill-structured problems in their teaching.
- Computer-based testing is a powerful tool for scaling exams in large lecture classes. The decision to adopt computer-based testing is typically framed as a tradeoff in terms of time: time saved by auto-grading is reallocated to developing problem pools, with a significant net savings. This paper seeks to examine the tradeoff in terms of accuracy in measuring student understanding. While some exam types (e.g., multiple choice) are readily portable to a computer-based format, adequately porting other exam types (e.g., drawings such as free-body diagrams, or worked problems) can be challenging. A key component of this challenge is to ask, “What is the exam actually able to measure?” In this paper the authors provide a quantitative and qualitative analysis of measurements of student understanding via computer-based testing in a sophomore-level Solid Mechanics course. At Michigan State University, Solid Mechanics is taught using the SMART methodology, where SMART stands for Supported Mastery Assessment through Repeated Testing. In a typical semester, students are given 5 exams that test their understanding of the material. Each exam is graded using the SMART rubric, which awards full points for a correct answer, a percentage of the points for non-conceptual errors, and zero points for a solution that contains a conceptual error. Every exam is divided into four sections: concept, simple, average, and challenge. Each exam has at least one retake opportunity, for a total of 10 written tests. In the current study, students representing 10% of the class took half of each exam in PrairieLearn, a computer-based auto-grading platform. During this exam, students were given instant feedback on submitted answers (correct or incorrect) and an opportunity to identify their mistakes and resubmit their work. Students were provided with scratch paper to set up the problems and work out solutions. After the exam, the paper-based work was compared with the computer-submitted answers. This paper examines what types of mistakes (conceptual and non-conceptual) students were able to correct when feedback was provided; the answer depends on the type and difficulty of the problem. The analysis also examines whether students taking the computer-based test performed at the same level as their peers who took the paper-based exams. Additionally, student feedback is provided and discussed.
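As a rough illustration of the SMART scoring rule described in that abstract, the following Python sketch encodes the three cases; the 50% partial-credit value and the function name are assumptions for illustration, since the abstract only says that “some percentage” is awarded for non-conceptual errors.

```python
# Hypothetical sketch of a SMART-style scoring rule (not the authors' code).
# The 0.5 partial-credit fraction is an assumed value for illustration.

def smart_score(max_points, correct, conceptual_error, partial_fraction=0.5):
    if conceptual_error:
        return 0.0                        # any conceptual error earns zero points
    if correct:
        return float(max_points)          # a fully correct answer earns full points
    return partial_fraction * max_points  # non-conceptual errors only: partial credit


print(smart_score(10, correct=True, conceptual_error=False))   # 10.0
print(smart_score(10, correct=False, conceptual_error=False))  # 5.0
print(smart_score(10, correct=False, conceptual_error=True))   # 0.0
```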