

Title: The Engineering Design Process Portfolio Scoring Rubric (EDPPSR): Initial Validity and Reliability
Research prior to 2005 found that no single framework existed that could fully capture the engineering design process and benchmark each element of the process against a commonly accepted set of referenced artifacts. Complicating the construction of a stepwise, artifact-driven framework is the fact that engineering design is typically practiced over time as a complex and iterative process. For both novice and advanced students, learning and applying the design process is often cumulative, with many informal and formal programmatic opportunities to practice essential elements. The Engineering Design Process Portfolio Scoring Rubric (EDPPSR) was designed to apply to any portfolio intended to document an individual- or team-driven process leading to an original attempt to design a product, process, or method that provides an optimal solution to a genuine and meaningful problem. In essence, the portfolio should be a detailed account or “biography” of a project and the thought processes that inform it. Besides narrative and explanatory text, entries may include (but need not be limited to) drawings, schematics, photographs, notebook and journal entries, transcripts or summaries of conversations and interviews, and audio/video recordings. Such entries are likely to be necessary to convey accurately and completely the complex thought processes behind the planning, implementation, and self-evaluation of the project. The rubric comprises four main components, each in turn comprising three elements; each element has its own holistic rubric. The process by which the EDPPSR was created provides evidence of the relevance and representativeness of the rubric and helps to establish validity. The EDPPSR model as originally rendered has a strong theoretical foundation, having been developed by reference to the literature on the steps of the design process, through focus groups, and through expert review by teachers, faculty, and researchers in performance-based portfolio rubrics and assessments. Using the unified construct validity framework, the EDPPSR's validity was further established through expert reviewers (experts in engineering design) providing evidence supporting the content relevance and representativeness of the EDPPSR in representing the basic process of engineering design. This manuscript offers empirical evidence that supports the use of the EDPPSR model to evaluate student design-based projects in a reliable and valid manner. Intra-class correlation coefficients (ICCs) were calculated to determine the inter-rater reliability (IRR) of the rubric. Given the small sample size, we also examined 95% confidence intervals to provide a range of values in which the estimate of inter-rater reliability is likely contained.
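As a concrete illustration of the reliability analysis described above, the sketch below shows one common way to compute ICC estimates with 95% confidence intervals from long-format ratings, assuming the pingouin Python library; the portfolio scores are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: ICC-based inter-rater reliability with 95% CIs,
# assuming the pingouin library. All scores below are hypothetical.
import pandas as pd
import pingouin as pg

# Long format: one row per (portfolio, rater) pair.
ratings = pd.DataFrame({
    "portfolio": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":     ["A", "B", "A", "B", "A", "B", "A", "B"],
    "score":     [4, 5, 2, 3, 5, 5, 3, 2],
})

icc = pg.intraclass_corr(
    data=ratings, targets="portfolio", raters="rater", ratings="score"
)
# Each row is one ICC variant (ICC1..ICC3k) with its 95% CI; with a
# small sample the CI width matters as much as the point estimate.
print(icc[["Type", "ICC", "CI95%"]])
```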
Award ID(s):
2120746 1849430
NSF-PAR ID:
10345718
Journal Name:
American Society for Engineering Education (ASEE) Conference & Exposition
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This project aims to enhance students’ learning in foundational engineering courses through oral exams, based on research conducted at the University of California San Diego. The adaptive, dialogic nature of oral exams gives instructors an opportunity to better understand students’ thought processes, thus holding promise for improving both assessments of conceptual mastery and students’ learning attitudes and strategies. However, the issues of oral exam reliability, validity, and scalability have not been fully addressed. As with any assessment format, careful design is needed to maximize the benefits of oral exams to student learning and minimize the potential concerns. Compared to traditional written exams, oral exams have a unique design space involving a large range of parameters, including the type of oral assessment questions, the grading criteria, how oral exams are administered, how questions are communicated and presented to the students, how feedback is provided, and other logistics such as the weight of the oral exam in the overall course grade, the frequency of oral assessment, etc. To address scalability for high-enrollment classes, a key element of the project is the involvement of the entire instructional team (instructors and teaching assistants). The project will therefore create a new training program to prepare faculty and teaching assistants to administer oral exams, including considerations of issues such as bias and students with disabilities. The purpose of this study is to create a framework for integrating oral exams in core undergraduate engineering courses, complementing existing assessment strategies, by (1) creating a guideline to optimize the oral exam design parameters for the best student learning outcomes; and (2) creating a new training program to prepare faculty and teaching assistants to administer oral exams. The project will implement an iterative design strategy using an evidence-based approach to evaluation. The effectiveness of the oral exams will be evaluated by tracking student improvements on conceptual questions across consecutive oral exams in a single course, as well as across other courses. Since its start in January 2021, the project is well underway. In this poster, we will present a summary of the results from year 1: (1) exploration of the oral exam design parameters and their impact on students’ engagement and perception of oral exams toward learning; (2) the effectiveness of the newly developed instructor and teaching assistant training programs; (3) the development of the evaluation instruments used to gauge the project’s success; and (4) instructor and teaching assistant experiences and perceptions.
  2.
    The purpose of this study is to re-examine the validity evidence of the engineering design self-efficacy (EDSE) scale scores by Carberry et al. (2010) within the context of secondary education. Self-efficacy refers to individuals’ belief in their capabilities to perform a domain-specific task. In engineering education, significant efforts have been made to understand the role of self-efficacy for students, given its positive impact on student outcomes such as performance and persistence. These studies have investigated and developed measures for different domains of engineering self-efficacy (e.g., general academic, domain-general, and task-specific self-efficacy). The EDSE scale is a frequently cited measure that examines task-specific self-efficacy within the domain of engineering design. The original scale contains nine items intended to represent the engineering design process. Initial score validity evidence was collected using a sample of 202 respondents with varying degrees of engineering experience, including undergraduate/graduate students and faculty members. This scale has been used primarily by researchers and practitioners with engineering undergraduate students to assess changes in their engineering design self-efficacy as a result of active learning interventions, such as project-based learning. Our work has begun to experiment with using the scale in a secondary education context, in conjunction with the increased introduction of engineering in K-12 education. Yet there is still a need to examine the score validity and reliability of this scale in non-undergraduate populations such as secondary school students. This study fills this important gap by testing the construct validity of the original nine items of the EDSE scale, supporting proper use of the scale by researchers and practitioners. This study was conducted as part of e4usa, a larger project investigating the development and implementation of a yearlong project-based engineering design course for secondary school students. Evidence of construct validity and reliability was collected using a multi-step process. First, a survey that includes the EDSE scale was administered to the participating students at nine associated secondary schools across the US at the beginning of Spring 2020. Analysis of the collected data is in progress and includes an exploratory factor analysis (EFA) on the 137 responses. Evidence of score reliability will be obtained by computing the internal consistency of each resulting factor. The resulting factor structure and items will be analyzed by comparing them with the original EDSE scale. The full paper will provide details about the psychometric evaluation of the EDSE scale. The findings from this paper will provide insights on the future usage of the EDSE scale in the context of secondary engineering education.
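A minimal sketch of the EFA step described above, assuming the factor_analyzer Python library; the responses are simulated stand-ins for the 137 student records, and the factor count is an illustrative assumption rather than the study's result.

```python
# Minimal sketch of the EFA step, assuming the factor_analyzer
# library. Responses are simulated stand-ins for the 137 records.
import numpy as np
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

rng = np.random.default_rng(0)
# 137 respondents x 9 Likert-type EDSE items (placeholder data).
responses = rng.integers(1, 7, size=(137, 9)).astype(float)

# Sampling adequacy check before factoring (KMO > ~0.6 is customary).
_, kmo_total = calculate_kmo(responses)
print(f"KMO: {kmo_total:.2f}")

# The two-factor count here is illustrative; in practice it would be
# chosen from eigenvalues/scree inspection of the real responses.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="minres")
efa.fit(responses)
print(efa.loadings_.round(2))
```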
  3. The purpose of the project is to identify how to measure various types of institutional support as they pertain to underrepresented and underserved populations in colleges of engineering and science. We ground this investigation in the Model of Co-Curricular Support, a conceptual framework that emphasizes the breadth of assistance currently used to support undergraduate students in engineering and science. The results from our study will help prioritize the elements of institutional support that should appear somewhere in a college’s suite of support efforts to improve engineering and science learning environments and to design effective programs, activities, and services. Our poster will present: 1) an overview of the instrument development process; 2) evaluation of the prototype for face and content validity by students and experts; and 3) instrument revision and data collection to determine test validity and reliability across varied institutional contexts. In evaluating the initial survey, we included multiple rounds of feedback from students and experts, receiving feedback from 46 participants (38 students, 8 administrators). We intentionally sampled for representation across engineering and science colleges; gender identity; race/ethnicity; international student status; and transfer student status. The instrument was deployed for the first time in Spring 2018 to the institutional project partners at three universities. It was completed by 722 students: 598 from University 1, 51 from University 2, and 123 from University 3. We tested the construct validity of these responses using a minimum-residuals (minres) exploratory factor analysis and correlation. A preliminary data analysis shows evidence of differences in how college of engineering and college of science students perceive the types of support they experience. The findings of this preliminary analysis were used to revise the instrument further prior to the next round of testing. Our target sample for the next instrument deployment is 2,000 students, so we will survey ~13,000 students based on an anticipated response rate of 15%. Following data collection, we will use confirmatory factor analysis to continue establishing construct validity and to report on the stability of the constructs emerging from our piloting on new student sample(s). We will also investigate differences across these constructs by subpopulations of students.
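The planned confirmatory step could look like the following sketch, assuming factor_analyzer's CFA support; the item names, the two-construct structure, and the simulated responses are all hypothetical placeholders, not the project's instrument.

```python
# Sketch of a confirmatory factor analysis (CFA) step, assuming
# factor_analyzer's CFA support. The item names, the two-construct
# structure, and the responses are all hypothetical.
import numpy as np
import pandas as pd
from factor_analyzer import (ConfirmatoryFactorAnalyzer,
                             ModelSpecificationParser)

rng = np.random.default_rng(1)
items = [f"q{i}" for i in range(1, 7)]
df = pd.DataFrame(rng.integers(1, 6, size=(300, 6)), columns=items)

# Hypothetical structure suggested by an earlier EFA.
model = {"support_A": items[:3], "support_B": items[3:]}
spec = ModelSpecificationParser.parse_model_specification_from_dict(
    df, model
)

cfa = ConfirmatoryFactorAnalyzer(spec, disp=False)
cfa.fit(df.values)
# Loadings indicate how well each item holds to its assigned construct.
print(pd.DataFrame(cfa.loadings_, index=items).round(2))
```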
  4. The purpose of this study is to develop an instrument to measure student perceptions of the learning experiences in their online undergraduate engineering courses. Online education continues to grow broadly in higher education, but the movement toward acceptance and comprehensive utilization of online learning has generally been slower in engineering. Recently, however, there have been indicators that this could be changing. For example, ABET has accredited online undergraduate engineering degrees at Stony Brook University and Arizona State University (ASU), and an increasing number of other undergraduate engineering programs also offer online courses. During this period of transition in engineering education, further investigation of the online modality in the context of engineering education is needed, and survey instrumentation can support such investigations. The instrument presented in this paper is grounded in a Model for Online Course-level Persistence in Engineering (MOCPE), which our research team developed by combining two motivational frameworks used to study student persistence: the Expectancy x Value Theory of Achievement Motivation (EVT) and the ARCS model of motivational design. The initial MOCPE instrument contained 79 items related to students’ perceptions of the characteristics of their courses (i.e., the online learning management system, instructor practices, and peer support), expectancies of course success, course task values, perceived course difficulties, and intention to persist in the course. Evidence of validity and reliability was collected using a three-step process. First, we tested the face and content validity of the instrument with experts in online engineering education and with online undergraduate engineering students. Next, the survey was administered to the online undergraduate engineering student population at a large, Southwestern public university, and an exploratory factor analysis (EFA) was conducted on the responses. Lastly, evidence of reliability was obtained by computing the internal consistency of each resulting scale. The final instrument has seven scales with 67 items across 10 factors. The Cronbach alpha values for these scales range from 0.85 to 0.97. The full paper will provide complete details about the development and psychometric evaluation of the instrument, including evidence of validity and reliability. The instrument described in this paper will ultimately be used as part of a larger, National Science Foundation-funded project investigating the factors influencing online undergraduate engineering student persistence. It is currently being used in the context of this project to conduct a longitudinal study intended to understand the relationships between the experiences of online undergraduate engineering students in their courses and their intentions to persist in the course. We anticipate that the instrument will be of interest and use to other engineering education researchers who are interested in studying the population of online students.
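The internal-consistency step mentioned above can be sketched as follows, assuming the pingouin Python library; the item data is simulated, so the resulting alpha will not resemble the 0.85 to 0.97 range the paper reports for its real scales.

```python
# Minimal sketch: internal consistency (Cronbach's alpha) for one
# scale, assuming the pingouin library. Item data is simulated; the
# paper's real scales report alphas of 0.85-0.97.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
scale_items = pd.DataFrame(
    rng.integers(1, 6, size=(200, 8)),
    columns=[f"item{i}" for i in range(1, 9)],
)

alpha, ci = pg.cronbach_alpha(data=scale_items)
print(f"alpha = {alpha:.2f}, 95% CI = {ci}")
```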
  5. Adaptive comparative judgment (ACJ) is a holistic judgment approach used to evaluate the quality of something (e.g., student work) in which individuals are presented with pairs of work and select the better item from each pair. This approach has demonstrated high levels of reliability with less bias than other approaches, thereby providing accurate values for summative and formative assessment in educational settings. Though ACJ itself has demonstrated significantly high reliability levels, relatively few studies have investigated the validity of peer-evaluated ACJ in the context of design thinking. This study explored peer evaluation, facilitated through ACJ, in terms of construct validity and criterion validity (concurrent validity and predictive validity) in the context of a design thinking course. Using ACJ, undergraduate students (n = 597) who took a design thinking course during Spring 2019 were invited to evaluate design point-of-view (POV) statements written by their peers. As a result of this ACJ exercise, each POV statement attained a specific parameter value reflecting the quality of the statement. To examine construct validity, the researchers conducted a content analysis comparing the contents of the 10 POV statements with the highest scores (parameter values) and the 10 POV statements with the lowest scores (parameter values), as derived from the ACJ session. For criterion validity, we studied the relationship between peer-evaluated ACJ and graders’ rubric-based grading. To study concurrent validity, we investigated the correlation between peer-evaluated ACJ parameter values and the grades assigned by course instructors for the same POV writing task. Predictive validity was then studied by exploring whether the peer-evaluated ACJ parameter values of POV statements were predictive of students’ grades on the final project. Results showed that the contents of the statements with the highest parameter values were of better quality than the statements with the lowest parameter values. Therefore, peer-evaluated ACJ showed construct validity. Also, though peer-evaluated ACJ did not show concurrent validity, it did show moderate predictive validity.
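ACJ parameter values of the kind described above are typically derived by fitting a pairwise-comparison model to the judgment data. The sketch below shows one standard, hypothetical choice (a Bradley-Terry model fit by maximum likelihood); the abstract does not specify which estimation method the ACJ platform used, and the judgment data here is invented.

```python
# Sketch: deriving ACJ "parameter values" from pairwise judgments
# with a Bradley-Terry model. The abstract does not name the
# estimation method; this is one standard, hypothetical choice.
import numpy as np
from scipy.optimize import minimize

# (winner, loser) index pairs over four hypothetical POV statements;
# a real ACJ session would choose the pairs adaptively.
judgments = [(0, 1), (1, 2), (2, 0), (3, 1),
             (0, 3), (2, 3), (1, 0), (3, 2)]
n_items = 4

def neg_log_likelihood(theta):
    # P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)),
    # so -log P(w beats l) = log1p(exp(theta_l - theta_w)).
    return sum(np.log1p(np.exp(theta[l] - theta[w]))
               for w, l in judgments)

res = minimize(neg_log_likelihood, np.zeros(n_items), method="BFGS")
theta = res.x - res.x.mean()  # center: only differences are identified
print(theta.round(2))  # higher value = higher judged quality
```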