This content will become publicly available on November 3, 2026
Title: WIP: Expert Feedback and Student Think-Alouds for the Learning through Making Instrument
Abstract—This WIP research paper presents validity evidence for a survey instrument designed to assess student learning in makerspaces. We report findings from expert reviews of item content and student interpretations of survey questions. The instrument was developed using a theory-driven approach to define constructs, followed by the development of questions aligned with those constructs. We solicited written feedback from 30 experts in instrument development and/or makerspaces, who rated the alignment of items with our constructs. Based on this input, we revised our items for clarity and consistency. We then conducted 25 cognitive interviews with a diverse group of students who use makerspaces, asking them to explain their understanding of each item and the reasoning behind their responses. Our recruitment ensured diversity in terms of race, gender, ethnicity, and academic background, extending beyond engineering majors. From our initial 45 items, we removed 6, modified 36, and added 1 based on expert feedback. During cognitive interviews, we began with 40 items, deleted one, and revised 23, resulting in 39 items for the pilot survey. Key findings included the value of examples in clarifying broad terms and improved student engagement with a revised rating scale—shifting from a 7-point Likert agreement scale to a self-description format encouraged fuller use of the scale. Our study contributes to the growing body of research on makerspaces by offering insights into how students describe their learning experiences and by providing initial validation evidence for a tool to assess those experiences, ultimately strengthening the credibility of the instrument.
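The item counts above are easy to misread, so here is a minimal sketch that traces the bookkeeping end to end (Python; the counts come straight from the abstract, while the variable names are ours):

```python
# Item counts through the two validation rounds reported in the abstract.
initial_items = 45

# Expert review round: 6 items removed, 36 modified (count unchanged), 1 added.
after_expert_review = initial_items - 6 + 1
assert after_expert_review == 40  # items entering the cognitive interviews

# Cognitive interview round: 1 item deleted, 23 revised (count unchanged).
pilot_items = after_expert_review - 1
assert pilot_items == 39  # items on the pilot survey
```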
Barnes, M. Elizabeth; Misheva, Taya; Supriya, K.; Rutledge, Michael; Brownell, Sara E.
(CBE—Life Sciences Education; Romine, William, Ed.)
Hundreds of articles have explored the extent to which individuals accept evolution, and the Measure of Acceptance of the Theory of Evolution (MATE) is the most often used survey. However, research indicates the MATE has limitations, and it has not been updated since its creation more than 20 years ago. In this study, we revised the MATE using information from cognitive interviews with 62 students that revealed response process errors with the original instrument. We found that students answered items on the MATE based on constructs other than their acceptance of evolution, which led to answer choices that did not fully align with their actual acceptance: students answered items based on their understanding of evolution, their views of the nature of science, and differing definitions of evolution. We revised items on the MATE, conducted 29 cognitive interviews on the revised version, and administered it to 2881 students in 22 classes. We provide response process validity evidence for the new measure through cognitive interviews with students, structural validity evidence through a Rasch dimensionality analysis, and concurrent validity evidence through correlations with other measures of evolution acceptance. Researchers can now measure student evolution acceptance using this new version of the survey, which we have called the MATE 2.0.
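As an illustration of the concurrent validity step mentioned above (correlating the new measure with other acceptance measures), a minimal sketch; the data are simulated and all names are ours, not the study's:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 200

# Simulate a shared latent "evolution acceptance" trait...
acceptance = rng.normal(size=n_students)
# ...observed with noise by two instruments: the revised survey and a
# second, established acceptance measure (both hypothetical here).
revised_total = 60 + 10 * acceptance + rng.normal(scale=5, size=n_students)
other_total = 40 + 6 * acceptance + rng.normal(scale=4, size=n_students)

# Concurrent validity evidence: the two total scores should correlate strongly.
r, p = pearsonr(revised_total, other_total)
print(f"r = {r:.2f}, p = {p:.2g}")
```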
Lanci, Sarah; Nadelson, Louis; Villanueva, Idalis; Bouwma-Gearhart, Jana; Youmans, Kate; Lenz, Adam
(American Society for Engineering Education)
Makerspaces have become a common fixture within engineering education programs. The spaces are used in a wide range of configurations but are typically intended to facilitate student collaboration, communication, creativity, and critical thinking, essentially giving students the opportunity to learn 21st-century skills and develop a deeper understanding of the processes of engineering. Makerspace structure, layout, and use have been fairly well researched, yet the impact of makerspaces on student learning is understudied, partly due to a lack of tools for measuring student learning in these spaces. We developed a survey tool to assess undergraduate engineering students' perceptions and learning in makerspaces, considering levels of students' motivation, professional identity, engineering knowledge, and belongingness in the context of makerspaces. Our survey consists of multiple positively-phrased (supporting a condition) and some negatively-phrased (refuting a condition) survey items mapped to each of our four constructs. The final survey contained 60 selected-response items, including demographic items. We vetted the instrument with an advisory panel for an additional level of validation and piloted the survey with undergraduate engineering students at two universities, collecting completed responses from 196 participants. Our reliability analysis and additional statistical calculations indicated that the tool was statistically sound and effectively gathered the data it was designed to measure.
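For readers unfamiliar with the reliability analysis mentioned above, a minimal sketch of Cronbach's alpha for one construct's items, assuming responses are collected in a respondents-by-items matrix (the data here are simulated, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulate 196 respondents answering six correlated 7-point items.
rng = np.random.default_rng(42)
trait = rng.normal(size=(196, 1))
responses = np.clip(np.round(4 + trait + rng.normal(scale=0.8, size=(196, 6))), 1, 7)

print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Note that negatively-phrased (refuting) items like those described above would need to be reverse-coded before entering a calculation like this one.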
This paper introduces the pilot implementation of the Evidence Based Personas survey instrument for assessing non-cognitive attributes of undergraduate students at different stages of their engineering degree, for the purpose of informing proactive advising processes. The survey instrument was developed with two key objectives: first, to assess its potential for streamlining and shortening existing instruments, and second, to explore the possibility of consolidating items from different surveys that measure the same or closely related constructs. A proactive advising system is being developed that uses the Mediation Model of Research Experiences (MMRE) as a framework. Within this framework, participation in various educational activities is linked to increased Commitment to Engineering via three mediating parameters: Self-Efficacy, Teamwork/Leadership Self-Efficacy, and Engineering Identity. The existing, validated MMRE survey instrument was used as a starting point for developing the current instrument, with the goal of streamlining and shortening the set of questions. Ultimately, we envision augmenting the shortened instrument with items related to broader non-cognitive and affective constructs from the SUCCESS instrument. Noting that both the MMRE and SUCCESS instruments include measures of Self-Efficacy and Engineering Identity, selected questions from both were included and compared. Data were collected from 395 total respondents, and subsequent analysis was based on 337 valid participants. Exploratory and confirmatory factor analysis techniques were employed to uncover latent variables within the results, particularly in the area of Self-Efficacy, where the combined items of the SUCCESS and MMRE instruments were used. Cronbach's alpha was used to assess the internal consistency of the survey instrument. The Teamwork, Engineering Identity, and Commitment to Engineering constructs all produced Cronbach's alpha values in excess of 0.80. The Self-Efficacy construct fell below the 0.80 threshold at 0.77, which is considered respectable but indicates some shortcomings relative to the other constructs. The EFA four-factor pattern matrix shows the SUCCESS instrument items breaking out into their own components while the MMRE items merge with some of the Engineering Identity items, suggesting a distinction in the underlying concepts these items may be measuring. This finding is further supported in the CFA through an assessment of the Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and Root Mean Square Error of Approximation (RMSEA) of these constructs. The initial grouping of the four constructs produced a robust CFI of 0.853, a robust TLI of 0.838, and a robust RMSEA of 0.075. Splitting Self-Efficacy into two sub-scales, one defined by the three items from the SUCCESS instrument and the other by the four remaining items from the MMRE instrument, and likewise splitting Engineering Identity into two sub-scales, improved the fit: the robust CFI and TLI rose to 0.928 and 0.919, respectively, and the robust RMSEA fell to 0.053. The findings of the factor analyses indicate that a shortened form of the MMRE survey instrument will provide reliable measures of the underlying constructs. Additionally, the results suggest that self-efficacy as measured by the MMRE items and by the SUCCESS items reflects two separate aspects of self-efficacy; these items do not load well onto a single factor.
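A minimal sketch of the exploratory step described above, written against the open-source factor_analyzer package; the package choice, file name, and column handling are our assumptions, not details from the paper:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical item-level responses (337 valid rows, one column per item).
survey_df = pd.read_csv("responses.csv")

# Four-factor EFA with an oblique rotation, since the constructs
# (Self-Efficacy, Teamwork, Engineering Identity, Commitment) correlate.
fa = FactorAnalyzer(n_factors=4, rotation="promax")
fa.fit(survey_df)

# The pattern matrix: inspect whether SUCCESS and MMRE self-efficacy
# items load on separate factors, as the paper reports.
loadings = pd.DataFrame(fa.loadings_, index=survey_df.columns)
print(loadings.round(2))
```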
Verdín, Dina; Smith, Jessica M.; Lucena, Juan C.
(Journal of Engineering Education)
Abstract. Background: Students who are the first in their families to attend college are an integral part of undergraduate engineering programs. Growing bodies of research argue that educators could better support these students if they understood the unique backgrounds, experiences, and knowledge they bring with them to higher education. Purpose/Hypothesis: The purpose of this article is twofold. First, we identify salient funds of knowledge used by a group of first-generation college students in their educational and work-related experiences. Second, we use the funds of knowledge identified in our participants' experiences to create a survey instrument. Design/Method: A mixed methods approach was used. Ethnographic interview data from six first-generation college students were used to hypothesize constructs and create survey items. Survey data were collected from 812 students. Exploratory and confirmatory factor analyses were used to verify the underlying theoretical structures among the survey items and hypothesized constructs. Results: Validity evidence supported a 10-factor model as opposed to the hypothesized 6-factor model. The 10 latent constructs that make up the funds of knowledge instrument are as follows: tinkering knowledge from home, tinkering knowledge from work, connecting experiences, networks from family members, networks from college friends, networks from coworkers, networks from neighborhood friends, perspective taking, reading people, and mediating ability. Conclusions: Recognizing first-generation college students' funds of knowledge is a first step to creating curricular spaces and experiences that better serve them. A survey scale allows educators to empirically examine how these accumulated bodies of knowledge are converted to capital and create advantages in engineering, and it provides a useful tool for bridging students' knowledge into the classroom.
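A sketch of the confirmatory side of such an analysis, using the open-source semopy package with lavaan-style model syntax; the package, construct names, and item names here are illustrative assumptions, not the study's actual specification:

```python
import pandas as pd
from semopy import Model, calc_stats

# Two of the ten reported factors, with placeholder item names.
model_desc = """
TinkeringHome =~ th1 + th2 + th3
PerspectiveTaking =~ pt1 + pt2 + pt3
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical item-level data
model = Model(model_desc)
model.fit(data)

# Global fit statistics (chi-square, CFI, TLI, RMSEA, ...) for judging
# whether the hypothesized factor structure fits the responses.
print(calc_stats(model).T)
```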
Abstract. The Transformative Learning Scale for the Innovation Mindset (TLSIM) is an instrument that effectively assesses both process-related experiences and outcome-oriented shifts in students' self-awareness, open-mindedness, and innovation capabilities resulting from participation in innovation competitions and programs (ICPs), which are experiential learning opportunities. It was developed using transformative learning theory (TLT) and the Kern Entrepreneurial Engineering Network's (KEEN) 3Cs framework (Curiosity, Connections, and Creating Value). The study involved developing scale items, establishing content and face validity through expert reviews and student focus groups, and conducting psychometric analysis using confirmatory factor analysis (CFA) on data collected from 291 STEM students (70.2% from engineering) who participated in ICPs. The CFA results showed strong factor loadings across most constructs, with Root Mean Square Error of Approximation (RMSEA) values within acceptable limits, confirming the robustness of the TLSIM for measuring both process-oriented (RMSEA = 0.047, CFI = 0.929) and outcome-oriented constructs (RMSEA = 0.052, CFI = 0.901) in the development of an innovation mindset. The analysis showed that the TLSIM is a reliable and valid instrument with strong psychometric properties for measuring key constructs related to the innovation mindset; it can capture significant changes in students' beliefs, attitudes, and self-perceptions regarding innovation. Future research should refine the TLSIM across various disciplines.
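The RMSEA and CFI values quoted above come from standard chi-square-based definitions; a worked sketch with placeholder numbers (not values from the TLSIM study):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation for one fitted model."""
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Comparative Fit Index of a model against its baseline (null) model."""
    d_model = max(chi2_m - df_m, 0.0)
    d_baseline = max(chi2_b - df_b, d_model)
    return 1.0 - d_model / d_baseline

# Placeholder chi-square results for a model fit to N = 291 respondents.
print(f"RMSEA = {rmsea(chi2=310.0, df=190, n=291):.3f}")                      # ~0.047
print(f"CFI   = {cfi(chi2_m=310.0, df_m=190, chi2_b=1800.0, df_b=210):.3f}")  # ~0.925
```

Commonly cited thresholds (RMSEA below roughly 0.06 to 0.08, CFI above roughly 0.90) are what "acceptable limits" typically refers to in reports like these.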
Citation: Marcos, L., Nagel, R., Linsey, J., Alemán, M., Douglas, K., & Holloway, E. WIP: Expert Feedback and Student Think-Alouds for the Learning through Making Instrument. IEEE Frontiers in Education. https://par.nsf.gov/biblio/10659307