

Title: Exploring the Validity of the Engineering Design Self-Efficacy Scale for Secondary School Students (Research To Practice)
The purpose of this study is to re-examine the validity evidence of the engineering design self-efficacy (EDSE) scale scores by Carberry et al. (2010) within the context of secondary education. Self-efficacy refers to individuals' belief in their capabilities to perform a domain-specific task. In engineering education, significant efforts have been made to understand the role of self-efficacy for students, given its positive impact on student outcomes such as performance and persistence. These studies have investigated and developed measures for different domains of engineering self-efficacy (e.g., general academic, domain-general, and task-specific self-efficacy). The EDSE scale is a frequently cited measure of task-specific self-efficacy within the domain of engineering design. The original scale contains nine items intended to represent the engineering design process. Initial score validity evidence was collected using a sample of 202 respondents with varying degrees of engineering experience, including undergraduate/graduate students and faculty members. The scale has primarily been used by researchers and practitioners with undergraduate engineering students to assess changes in their engineering design self-efficacy resulting from active learning interventions, such as project-based learning. Our work has begun to experiment with using the scale in a secondary education context, in conjunction with the increasing introduction of engineering into K-12 education. Yet there is still a need to examine the score validity and reliability of this scale in non-undergraduate populations, such as secondary school students. This study fills this important gap by testing the construct validity of the original nine items of the EDSE scale, supporting proper use of the scale by researchers and practitioners. This study was conducted as part of the larger e4usa project investigating the development and implementation of a yearlong, project-based engineering design course for secondary school students. Evidence of construct validity and reliability was collected using a multi-step process. First, a survey that includes the EDSE scale was administered to participating students at nine associated secondary schools across the US at the beginning of Spring 2020. Analysis of the collected data is in progress and includes an Exploratory Factor Analysis (EFA) of the 137 responses. Evidence of score reliability will be obtained by computing the internal consistency of each resulting factor. The resulting factor structure and items will then be compared with those of the original EDSE scale. The full paper will provide details about the psychometric evaluation of the EDSE scale. The findings will provide insights into the future usage of the EDSE scale in the context of secondary engineering education.
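For readers unfamiliar with the analysis pipeline described above, the following is a minimal sketch of an EFA followed by an internal-consistency check in Python. It assumes the nine item responses live in a CSV with columns q1 through q9; the file name, column names, and use of the factor_analyzer package are illustrative assumptions, not the authors' actual analysis code.

```python
# Sketch of the analysis steps described above (EFA + internal consistency).
# Assumes responses are in a CSV with one column per EDSE item (q1..q9);
# the column names and file name are placeholders, not the study's real data.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

df = pd.read_csv("edse_responses.csv").dropna()  # e.g., 137 usable responses

# Check factorability before running EFA.
chi_sq, p_value = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_overall = calculate_kmo(df)
print(f"Bartlett p = {p_value:.4f}, overall KMO = {kmo_overall:.2f}")

# Eigenvalues of the correlation matrix guide how many factors to retain.
efa = FactorAnalyzer(rotation=None)
efa.fit(df)
eigenvalues, _ = efa.get_eigenvalues()

# Re-fit with the retained number of factors and an oblique rotation.
n_factors = int(sum(ev > 1.0 for ev in eigenvalues))  # Kaiser criterion as one heuristic
efa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
efa.fit(df)
loadings = pd.DataFrame(efa.loadings_, index=df.columns)
print(loadings.round(2))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency for the items loading on one factor."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example: reliability of a hypothetical factor made up of items q1-q4.
print(cronbach_alpha(df[["q1", "q2", "q3", "q4"]]))
```

An oblique rotation (oblimin) is sketched here because self-efficacy factors are typically correlated; the rotation and factor-retention criteria used in the actual study may differ.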
Award ID(s): 1849430
NSF-PAR ID: 10294481
Author(s) / Creator(s):
Date Published:
Journal Name: 2021 ASEE Virtual Annual Conference Content Access, Virtual Conference
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The purpose of this study is to develop an instrument to measure student perceptions about the learning experiences in their online undergraduate engineering courses. Online education continues to grow broadly in higher education, but the movement toward acceptance and comprehensive utilization of online learning has generally been slower in engineering. Recently, however, there have been indicators that this could be changing. For example, ABET has accredited online undergraduate engineering degrees at Stony Brook University and Arizona State University (ASU), and an increasing number of other undergraduate engineering programs also offer online courses. During this period of transition in engineering education, further investigation of the online modality in the context of engineering education is needed, and survey instrumentation can support such investigations. The instrument presented in this paper is grounded in a Model for Online Course-level Persistence in Engineering (MOCPE), which was developed by our research team by combining two motivational frameworks used to study student persistence: the Expectancy x Value Theory of Achievement Motivation (EVT) and the ARCS model of motivational design. The initial MOCPE instrument contained 79 items related to students' perceptions of the characteristics of their courses (i.e., the online learning management system, instructor practices, and peer support), expectancies of course success, course task values, perceived course difficulties, and intention to persist in the course. Evidence of validity and reliability was collected using a three-step process. First, we tested the face and content validity of the instrument with experts in online engineering education and online undergraduate engineering students. Next, the survey was administered to the online undergraduate engineering student population at a large, Southwestern public university, and an exploratory factor analysis (EFA) was conducted on the responses. Lastly, evidence of reliability was obtained by computing the internal consistency of each resulting scale. The final instrument has seven scales with 67 items across 10 factors. The Cronbach alpha values for these scales range from 0.85 to 0.97. The full paper will provide complete details about the development and psychometric evaluation of the instrument, including evidence of validity and reliability. The instrument described in this paper will ultimately be used as part of a larger, National Science Foundation-funded project investigating the factors influencing online undergraduate engineering student persistence. It is currently being used in the context of this project to conduct a longitudinal study intended to understand the relationships between the experiences of online undergraduate engineering students in their courses and their intentions to persist in the course. We anticipate that the instrument will be of interest and use to other engineering education researchers who are also interested in studying the population of online students.
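For reference, the internal-consistency statistic reported here (Cronbach's alpha, 0.85 to 0.97 per scale) is conventionally computed for a scale of k items as

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the total scale score; values near 1 indicate that the items on a scale vary together.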
  2. This evidence-based practices paper discusses the method employed in validating the use of a project-modified version of the PROCESS tool (Grigg, Van Dyken, Benson, & Morkos, 2013) for measuring student problem-solving skills. The PROCESS tool allows raters to score students' ability in the domains of Problem definition, Representing the problem, Organizing information, Calculations, Evaluating the solution, Solution communication, and Self-assessment. Specifically, this research compares student performance on solving traditional textbook problems with novel, student-generated learning activities (i.e., reverse-engineering videos in order to then create their own homework problems and solutions). The use of student-generated learning activities to assess student problem-solving skills has theoretical underpinnings in Felder's (1987) work on "creating creative engineers," as well as the need to develop students' abilities to transfer learning and solve problems in a variety of real-world settings. In this study, four raters used the PROCESS tool to score the performance of 70 students randomly selected from two undergraduate chemical engineering cohorts at two Midwest universities. Students from both cohorts solved 12 traditional textbook-style problems, and students from the second cohort solved an additional nine student-generated video problems. Any large-scale assessment where multiple raters use a rating tool requires the investigation of several aspects of validity. The many-facets Rasch measurement model (MFRM; Linacre, 1989) has the psychometric properties to determine whether characteristics other than "student problem-solving skills" influence the scores assigned, such as rater bias, problem difficulty, or student demographics. Before implementing the full rating plan, MFRM was used to examine how raters interacted with the six items on the modified PROCESS tool to score a random selection of 20 students' performance in solving one problem. An external evaluator led "inter-rater reliability" meetings where raters deliberated the rationale for their ratings, and differences were resolved by recourse to Pretz et al.'s (2003) problem-solving cycle, which informed the development of the PROCESS tool. To test the new understandings of the PROCESS tool, raters were assigned to score one new problem from a different randomly selected group of six students. Those results were then analyzed in the same manner as before. This iterative process resulted in substantial increases in reliability, which can be attributed to increased confidence that raters were operating with common definitions of the items on the PROCESS tool and rating with consistent and comparable severity. This presentation will include examples of the student-generated problems and a discussion of common discrepancies in, and solutions to, the raters' initial use of the PROCESS tool. The findings, as well as the adapted PROCESS tool used in this study, can be useful to engineering educators and engineering education researchers.
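As context for the analysis above, the many-facets Rasch measurement model adds a rater facet to the basic Rasch model. In Linacre's (1989) formulation (reproduced here for orientation; the paper's own notation may differ), the probability that student n receives score category k rather than k-1 from rater j on item i is modeled as

\[
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

where \(B_n\) is the student's ability, \(D_i\) the item's difficulty, \(C_j\) the rater's severity, and \(F_k\) the difficulty of reaching score category k. Fitting this model is what allows rater severity and problem difficulty to be separated from the student ability estimates.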
  3. Chemistry education research has increasingly considered the role of affect when investigating chemistry learning environments over the past decade. Despite its popularity in educational spheres, mindset has been understudied from a chemistry-specific perspective. Mindset encompasses one's beliefs about the ability to change intelligence with effort and has been shown to be a domain-specific construct. For this reason, students' mindset would be most relevant in chemistry if it were measured as a chemistry-specific construct. To date, no mindset instrument has been developed for use in chemistry learning contexts. Here we present evidence supporting the development process and final product of a mindset instrument designed specifically for undergraduate chemistry students. The Chemistry Mindset Instrument (CheMI) was developed through an iterative design process requiring multiple implementations and revisions. We analyze the psychometric properties of CheMI data from a sample of introductory (general and organic) chemistry students enrolled in lecture courses. We achieved good data-model fit via confirmatory factor analysis and high reliability for the newly developed items, indicating that the instrument functions well with the target population. Significant correlations were observed between chemistry mindset and students' self-efficacy, mastery goals, and course performance, providing external validity evidence for the construct measurement.
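To make the confirmatory factor analysis step concrete, the sketch below shows one way such a measurement model could be fit in Python with the semopy package; the data file, item names, and single-factor structure are placeholders and do not reflect the actual CheMI items or factor structure.

```python
# Minimal CFA sketch (not the authors' analysis). Assumes a CSV of Likert-type
# responses with placeholder columns mindset_1 ... mindset_6 loading on one factor.
import pandas as pd
import semopy

data = pd.read_csv("chemi_responses.csv").dropna()

# lavaan-style measurement model: one latent chemistry-mindset factor.
model_desc = """
chem_mindset =~ mindset_1 + mindset_2 + mindset_3 + mindset_4 + mindset_5 + mindset_6
"""

model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())            # factor loadings and error variances
print(semopy.calc_stats(model))   # fit indices such as CFI, TLI, RMSEA
```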
  4. Drawing, as a skill, is closely tied to many creative fields, and it is a unique practice for every individual. Drawing has been shown to improve cognitive and communicative abilities, such as visual communication, problem-solving skills, students' academic achievement, awareness of and attention to surrounding details, and sharpened analytical skills. Drawing also stimulates both sides of the brain and improves peripheral skills of writing, 3-D spatial recognition, critical thinking, and brainstorming. People are often exposed to drawing as children, drawing their families, their houses, animals, and, most notably, their imaginative ideas. These skills develop naturally over time to some extent; however, while drawing is a basic skill at its core, mastering it requires extensive practice and is often significantly influenced by an individual's self-efficacy. Sketchtivity is an AI tool developed by Texas A&M University to facilitate the growth of drawing skills and track performance. Sketching skill development depends in part on students' self-efficacy associated with their drawing abilities. Gauging the drawing self-efficacy of individuals is critical to understanding the impact that practice with this new instrument has had, especially in contrast to traditional practice methods. It may also be very useful for other researchers, educators, and technologists. This study reports the development and initial validation of a new 13-item measure that assesses perceived drawing self-efficacy. The 13 items were developed based on Bandura's guide for constructing self-efficacy scales. The participants in the study consisted of 222 high school students from engineering, art, and pre-calculus classes. Internal consistency of the 13 observed items was found to be very high (Cronbach alpha: 0.943), indicating high reliability of the scale. Exploratory factor analysis was performed to further investigate the variance among the 13 observed items, to find the underlying latent factors that influenced the observed items, and to see whether the items needed revision. We found that a three-factor model was the best fit for our data, given fit statistics and model interpretability. The factors are: Factor 1: self-efficacy with respect to drawing specific objects; Factor 2: self-efficacy with respect to drawing practically to solve problems, communicate with others, and brainstorm ideas; Factor 3: self-efficacy with respect to drawing to create, express ideas, and use one's imagination. An alternative four-factor model is also discussed. The purpose of our study is to inform interventions that increase self-efficacy. We believe that this assessment will be especially valuable for education researchers who implement AI-based tools to measure drawing skills. This initial validity study shows promising results for a new measure of drawing self-efficacy. Further validation with new populations and drawing classes is needed to support its use, as is further psychometric testing of item-level performance. In the future, this self-efficacy assessment could be used by teachers and researchers to guide instructional interventions meant to increase drawing self-efficacy.
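The abstract notes that the three-factor solution was chosen on the basis of fit statistics and interpretability. One common, complementary heuristic for deciding how many factors to retain (not necessarily the one used here) is Horn's parallel analysis, sketched below; the data file and random seed are placeholders, while the sample size (222) and item count (13) come from the abstract.

```python
# Minimal parallel-analysis sketch for factor retention (illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.read_csv("drawing_self_efficacy.csv").dropna()  # 222 respondents x 13 items (hypothetical file)
n, k = data.shape

# Eigenvalues of the observed inter-item correlation matrix, largest first.
observed_eigs = np.linalg.eigvalsh(np.corrcoef(data.T))[::-1]

# Average eigenvalues from correlation matrices of random normal data of the same shape.
n_sims = 100
random_eigs = np.zeros((n_sims, k))
for s in range(n_sims):
    sim = rng.standard_normal((n, k))
    random_eigs[s] = np.linalg.eigvalsh(np.corrcoef(sim.T))[::-1]

# Retain factors whose observed eigenvalue exceeds the random-data average.
threshold = random_eigs.mean(axis=0)
n_factors = int(np.sum(observed_eigs > threshold))
print(f"Factors to retain (parallel analysis): {n_factors}")
```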
  5. As more institutions create first-year engineering programs that teach an engineering design process, there is a growing desire to prepare students for this coursework in the high school setting. When exposing such a broad population to these ideas, a primary question arises regarding student attitudes toward engineering and how these attitudes develop over time. That is, how does this exposure to engineering design influence student attitudes toward engineering? Answering this question will also allow educators to better understand what motivates students to learn, how much their motivation impacts their overall mastery of these skills, and how these aspects of engineering self-efficacy and engineering design may differ between those who are on a pre-engineering track and those who are not. To begin answering this question, high school students enrolled in the Olathe City school system of Olathe, Kansas completed Engineering Problem-Framing Design Activities (EPDAs) in participating science courses (AP physics, physics, advanced biotechnology, chemistry, honors chemistry, biology, honors biology, and physical science) of the traditional science and engineering academy curricula offered by the district. Student engineering self-efficacy and motivation were also measured at the beginning and end of their coursework. This was done via a new instrument, the Engineering Design Value-Expectancy Scale (EDVES), which includes 38 items across three primary subscales: expectancy of success in, perceived value of, and identification with engineering and design. The development of this tool was presented and discussed in a previous study in which the EDVES instrument was analyzed for validity among first-year undergraduate engineering students. In this work, the responses of high school students on the EDVES were analyzed to establish validity in this new population and to begin exploring trends in student responses based on their sub-population. Validity testing was completed via Cook's validation evidence model with respect to scoring, generalization, and extrapolation evidence. The pre-course EDVES responses obtained were used to complete the validation and trend analysis (note that post-course data were not readily available at the time of analysis).