
Title: Exploring the Validity of the Engineering Design Self-Efficacy Scale for Secondary School Students (Research To Practice)
The purpose of this study is to re-examine the validity evidence of the engineering design self-efficacy (EDSE) scale scores by Carberry et al. (2010) within the context of secondary education. Self-efficacy refers to individuals’ belief in their capabilities to perform a domain-specific task. In engineering education, significant efforts have been made to understand the role of self-efficacy for students, considering its positive impact on student outcomes such as performance and persistence. These studies have investigated and developed measures for different domains of engineering self-efficacy (e.g., general academic, domain-general, and task-specific self-efficacy). The EDSE scale is a frequently cited measure that examines task-specific self-efficacy within the domain of engineering design. The original scale contains nine items intended to represent the engineering design process. Initial score validity evidence was collected using a sample of 202 respondents with varying degrees of engineering experience, including undergraduate/graduate students and faculty members. This scale has primarily been used by researchers and practitioners with undergraduate engineering students to assess changes in their engineering design self-efficacy as a result of active learning interventions, such as project-based learning. Our work has begun to experiment with using the scale in a secondary education context, in conjunction with the increased introduction of engineering in K-12 education. Yet there is still a need to examine the score validity and reliability of this scale in non-undergraduate populations such as secondary school students. This study fills this important gap by testing the construct validity of the original nine items of the EDSE scale, supporting proper use of the scale by researchers and practitioners.
This study was conducted as part of a larger e4usa project investigating the development and implementation of a yearlong, project-based engineering design course for secondary school students. Evidence of construct validity and reliability was collected using a multi-step process. First, a survey that includes the EDSE scale was administered to participating students at nine associated secondary schools across the US at the beginning of Spring 2020. Analysis of the collected data is in progress and includes an Exploratory Factor Analysis (EFA) of the 137 responses. Evidence of score reliability will be obtained by computing the internal consistency of each resulting factor. The resulting factor structure and items will be analyzed by comparison with the original EDSE scale. The full paper will provide details about the psychometric evaluation of the EDSE scale. The findings will provide insights into the future usage of the EDSE scale in the context of secondary engineering education.
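The EFA step described above typically begins with a factor-retention check. As a minimal, purely illustrative sketch (the data below are random stand-ins, not the study's 137 responses), the Kaiser criterion retains factors whose eigenvalues of the inter-item correlation matrix exceed 1:

```python
import numpy as np

# Hypothetical illustration only: the item count mirrors the 9-item EDSE
# scale, but the responses are randomly generated, not study data.
rng = np.random.default_rng(0)
n_respondents, n_items = 137, 9
responses = rng.integers(0, 101, size=(n_respondents, n_items)).astype(float)

# Kaiser criterion: retain factors whose eigenvalues of the
# inter-item correlation matrix exceed 1.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
n_factors = int(np.sum(eigenvalues > 1.0))
print(f"retained factors: {n_factors}")
```

In practice, the Kaiser criterion is usually combined with scree-plot inspection or parallel analysis before settling on a factor structure.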
Award ID(s):
1849430
NSF-PAR ID:
10294481
Journal Name:
2021 ASEE Virtual Annual Conference Content Access, Virtual Conference
Sponsoring Org:
National Science Foundation
More Like this
  1. The purpose of this study is to develop an instrument to measure student perceptions about the learning experiences in their online undergraduate engineering courses. Online education continues to grow broadly in higher education, but the movement toward acceptance and comprehensive utilization of online learning has generally been slower in engineering. Recently, however, there have been indicators that this could be changing. For example, ABET has accredited online undergraduate engineering degrees at Stony Brook University and Arizona State University (ASU), and an increasing number of other undergraduate engineering programs also offer online courses. During this period of transition in engineering education, further investigation about the online modality in the context of engineering education is needed, and survey instrumentation can support such investigations. The instrument presented in this paper is grounded in a Model for Online Course-level Persistence in Engineering (MOCPE), which was developed by our research team by combining two motivational frameworks used to study student persistence: the Expectancy x Value Theory of Achievement Motivation (EVT), and the ARCS model of motivational design. The initial MOCPE instrument contained 79 items related to students’ perceptions about the characteristics of their courses (i.e., the online learning management system, instructor practices, and peer support), expectancies of course success, course task values, perceived course difficulties, and intention to persist in the course. Evidence of validity and reliability was collected using a three-step process. First, we tested face and content validity of the instrument with experts in online engineering education and online undergraduate engineering students.
Next, the survey was administered to the online undergraduate engineering student population at a large, Southwestern public university, and an exploratory factor analysis (EFA) was conducted on the responses. Lastly, evidence of reliability was obtained by computing the internal consistency of each resulting scale. The final instrument has seven scales with 67 items across 10 factors. The Cronbach alpha values for these scales range from 0.85 to 0.97. The full paper will provide complete details about the development and psychometric evaluation of the instrument, including evidence of validity and reliability. The instrument described in this paper will ultimately be used as part of a larger, National Science Foundation-funded project investigating the factors influencing online undergraduate engineering student persistence. It is currently being used in the context of this project to conduct a longitudinal study intended to understand the relationships between the experiences of online undergraduate engineering students in their courses and their intentions to persist in the course. We anticipate that the instrument will be of interest and use to other engineering education researchers who are also interested in studying the population of online students.
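Internal consistency of a scale, as reported above, is commonly summarized with Cronbach's alpha. A minimal sketch of the standard formula, applied to simulated stand-in data rather than the survey responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical three-item scale whose items largely track one latent trait;
# the data are invented purely to exercise the formula.
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
scores = trait + 0.5 * rng.normal(size=(200, 3))
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

Because the simulated items share a strong common signal, the resulting alpha lands in the high range conventionally read as good reliability.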
  2. This evidence-based practices paper discusses the method employed in validating the use of a project-modified version of the PROCESS tool (Grigg, Van Dyken, Benson, & Morkos, 2013) for measuring student problem-solving skills. The PROCESS tool allows raters to score students’ ability in the domains of Problem definition, Representing the problem, Organizing information, Calculations, Evaluating the solution, Solution communication, and Self-assessment. Specifically, this research compares student performance on solving traditional textbook problems with novel, student-generated learning activities (i.e., reverse engineering videos in order to create their own homework problems and solutions). The use of student-generated learning activities to assess student problem-solving skills has theoretical underpinning in Felder’s (1987) work on “creating creative engineers,” as well as the need to develop students’ abilities to transfer learning and solve problems in a variety of real-world settings. In this study, four raters used the PROCESS tool to score the performance of 70 students randomly selected from two undergraduate chemical engineering cohorts at two Midwest universities. Students from both cohorts solved 12 traditional textbook-style problems, and students from the second cohort solved an additional nine student-generated video problems. Any large-scale assessment where multiple raters use a rating tool requires the investigation of several aspects of validity. The many-facets Rasch measurement model (MFRM; Linacre, 1989) has the psychometric properties to determine if there are any characteristics other than “student problem solving skills” that influence the scores assigned, such as rater bias, problem difficulty, or student demographics.
Before implementing the full rating plan, MFRM was used to examine how raters interacted with the six items on the modified PROCESS tool to score a random selection of 20 students’ performance in solving one problem. An external evaluator led “inter-rater reliability” meetings where raters deliberated the rationale for their ratings, and differences were resolved by recourse to Pretz et al.’s (2003) problem-solving cycle that informed the development of the PROCESS tool. To test the new understandings of the PROCESS tool, raters were assigned to score one new problem from a different randomly selected group of six students. Those results were then analyzed in the same manner as before. This iterative process resulted in substantial increases in reliability, which can be attributed to increased confidence that raters were operating with common definitions of the items on the PROCESS tool and rating with consistent and comparable severity. This presentation will include examples of the student-generated problems and a discussion of common discrepancies and solutions in the raters’ initial use of the PROCESS tool. The findings, as well as the adapted PROCESS tool used in this study, can be useful to engineering educators and engineering education researchers.
  3. Chemistry education research has increasingly considered the role of affect when investigating chemistry learning environments over the past decade. Despite its popularity in educational spheres, mindset has been understudied from a chemistry-specific perspective. Mindset encompasses one's beliefs about the ability to change intelligence with effort and has been shown to be a domain-specific construct. For this reason, students’ mindset would be most relevant in chemistry if it were measured as a chemistry-specific construct. To date, no instrument has been developed for use in chemistry learning contexts. Here we present evidence supporting the development process and final product of a mindset instrument designed specifically for undergraduate chemistry students. The Chemistry Mindset Instrument (CheMI) was developed through an iterative design process requiring multiple implementations and revisions. We analyze the psychometric properties of CheMI data from a sample of introductory (general and organic) chemistry students enrolled in lecture courses. We achieved good data-model fit via confirmatory factor analysis and high reliability for the newly developed items, indicating that the instrument functions well with the target population. Significant correlations were observed for chemistry mindset with students’ self-efficacy, mastery goals, and course performance, providing external validity evidence for the construct measurement.
  4. Drawing, as a skill, is closely tied to many creative fields and it is a unique practice for every individual. Drawing has been shown to improve cognitive and communicative abilities, such as visual communication, problem-solving skills, students’ academic achievement, awareness of and attention to surrounding details, and sharpened analytical skills. Drawing also stimulates both sides of the brain and improves peripheral skills of writing, 3-D spatial recognition, critical thinking, and brainstorming. People are often exposed to drawing as children, drawing their families, their houses, animals, and, most notably, their imaginative ideas. These skills develop over time naturally to some extent; however, while the base concept of drawing is a basic skill, mastery of this skill requires extensive practice and can often be significantly impacted by an individual's self-efficacy. Sketchtivity is an AI tool developed by Texas A&M University to facilitate the growth of drawing skills and track their performance. Sketching skill development depends in part on students’ self-efficacy associated with their drawing abilities. Gauging the drawing self-efficacy of individuals is critical in understanding the impact that drawing practice with this novel instrument has had, especially in contrast to traditional practicing methods. It may also be very useful for other researchers, educators, and technologists. This study reports the development and initial validation of a new 13-item measure that assesses perceived drawing self-efficacy. The 13 items to measure drawing self-efficacy were developed based on Bandura’s guide for constructing self-efficacy scales. The participants in the study consisted of 222 high school students from engineering, art, and pre-calculus classes. Internal consistency of the 13 observed items was found to be very high (Cronbach alpha: 0.943), indicating a high reliability of the scale.
Exploratory Factor Analysis was performed to further investigate the variance among the 13 observed items, to find the underlying latent factors that influenced the observed items, and to see if the items needed revision. We found that a three-factor model was the best fit for our data, given fit statistics and model interpretability. The factors are: Factor 1: Self-efficacy with respect to drawing specific objects; Factor 2: Self-efficacy with respect to drawing practically to solve problems, communicating with others, and brainstorming ideas; Factor 3: Self-efficacy with respect to drawing to create, express ideas, and use one’s imagination. An alternative four-factor model is also discussed. The purpose of our study is to inform interventions that increase self-efficacy. We believe that this assessment will be valuable especially for education researchers who implement AI-based tools to measure drawing skills. This initial validity study shows promising results for a new measure of drawing self-efficacy. Further validation with new populations and drawing classes, along with further psychometric testing of item-level performance, is needed to support its use. In the future, this self-efficacy assessment could be used by teachers and researchers to guide instructional interventions meant to increase drawing self-efficacy.
  5. This research paper describes the development of an assessment instrument for use with middle school students that provides insight into students’ interpretive understanding by looking at early indicators of developing expertise in students’ responses to solution generation, reflection, and concept demonstration tasks. We begin by detailing a synthetic assessment model that served as the theoretical basis for assessing specific thinking skills. We then describe our process of developing test items by working with a Teacher Design Team (TDT) of instructors in our partner school system to set guidelines that would better orient the assessment in that context, working within the framework of standards and disciplinary core ideas enumerated in the Next Generation Science Standards (NGSS). We next specify our process of refining the assessment from 17 items across three separate item pools to a final total of three open-response items. We then provide evidence for the validity and reliability of the assessment instrument from the standards of (1) content, (2) meaningfulness, (3) generalizability, and (4) instructional sensitivity. As part of the discussion of the standards of generalizability and instructional sensitivity, we detail a study carried out in our partner school system in the fall of 2019. The instrument was administered to students in treatment (n = 201) and non-treatment (n = 246) groups, wherein the former participated in a two-to-three-week, NGSS-aligned experimental instructional unit introducing the principles of engineering design that focused on engaging students using the Imaginative Education teaching approach. The latter group was taught using the district’s existing engineering design curriculum. Results from statistical analysis of student responses showed that the interrater reliability of the scoring procedures was good-to-excellent, with intra-class correlation coefficients ranging between .72 and .95.
To gauge the instructional sensitivity of the assessment instrument, a series of non-parametric comparative analyses (independent two-group Mann-Whitney tests) were carried out. These found statistically significant differences between treatment and non-treatment student responses related to the outcomes of fluency and elaboration, but not reflection.
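The comparative analysis described above can be sketched with SciPy's rank-based Mann-Whitney U test. The scores below are invented stand-ins, sized only to mirror the two groups in the study:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical rubric scores on a 0-10 scale; group sizes mirror the
# paper's n = 201 treatment and n = 246 non-treatment, values are invented.
rng = np.random.default_rng(2)
treatment = np.clip(rng.normal(6.5, 1.5, 201), 0, 10)
comparison = np.clip(rng.normal(5.5, 1.5, 246), 0, 10)

# Two-sided test: are the two groups' score distributions shifted?
stat, p = mannwhitneyu(treatment, comparison, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4g}")
```

The Mann-Whitney test is a sensible choice here because rubric scores are ordinal and need not be normally distributed, which a t-test would assume.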