
Title: Practical Rubrics for Informal Science Education Studies: (1) a STEM Research Design Rubric for Assessing Study Design and a (2) STEM Impact Rubric for Measuring Evidence of Impact
Informal learning institutions, such as museums, science centers, and community-based organizations, play a critical role in providing opportunities for students to engage in science, technology, engineering, and mathematics (STEM) activities during out-of-school hours. In recent years, thousands of studies, evaluations, and conference proceedings have been published measuring the impact these programs have on their participants. However, because studies of informal science education (ISE) programs vary considerably in how they are designed and in the quality of their designs, it is often difficult to assess their impact on participants. Knowing whether the outcomes reported by these studies are supported by sufficient evidence is important not only for maximizing participant impact, but also because considerable economic and human resources are invested to support informal learning initiatives. To address this problem, I used the theories of impact analysis and triangulation as a framework for developing user-friendly rubrics for assessing the quality of research designs and the evidence of impact. I drew on two main sources, research-based recommendations from STEM governing bodies and feedback from a focus group, to identify criteria indicative of high-quality STEM research and study design. Accordingly, I developed three STEM Research Design Rubrics, one for quantitative studies, one for qualitative studies, and one for mixed methods studies, that ISE researchers, practitioners, and evaluators can use to assess research design quality. Likewise, I developed three STEM Impact Rubrics, one for quantitative studies, one for qualitative studies, and one for mixed methods studies, that ISE researchers, practitioners, and evaluators can use to assess evidence of outcomes.
The rubrics developed in this study are practical tools that ISE researchers, practitioners, and evaluators can use to improve the field of informal science learning by increasing the quality of study designs and by discerning whether studies or program evaluations provide sufficient evidence of impact.
Award ID(s): 1710792
NSF-PAR ID: 10224889
Journal Name: Frontiers in Education
Volume: 5
ISSN: 2504-284X
Sponsoring Org: National Science Foundation
More Like this
  1. Researchers, evaluators, and designers from an array of academic disciplines and industry sectors are turning to participatory approaches as they seek to understand and address complex social problems. We refer to participatory approaches that collaboratively engage and partner with stakeholders in knowledge creation and problem solving for action and social change outcomes as collaborative change research, evaluation and design (CCRED). We further frame CCRED practitioners by their desire to move beyond knowledge creation for its own sake to the implementation of new knowledge as a tool for social change. In March and May of 2018, we conducted a literature search of multiple discipline-specific databases seeking collaborative, change-oriented scholarly publications. The search was limited to peer-reviewed journal articles, with English-language abstracts available, published in the last five years. The search resulted in 526 citations, 236 of which met inclusion criteria. Though the search was limited to English abstracts, all major geographic regions (North America, Europe, Latin America/Caribbean, APAC, Africa and the Middle East) were represented in the results, although many articles did not state a specific region. Of those identified, most studies were located in North America, with the Middle East having only one identified study. We followed a qualitative thematic synthesis process to examine the abstracts of peer-reviewed articles to identify practices that transcend individual disciplines, sectors, and contexts to achieve collaborative change. We surveyed the terminology used to describe CCRED, setting, content/topic of study, type of collaboration, and related benefits/outcomes in order to discern the words used to designate collaboration, the frameworks, tools and methods employed, and the presence of action, evaluation or outcomes.
Forty-three percent of the reviewed articles fell broadly within the social sciences, followed by 26 percent in education and 25 percent in health/medicine. In terms of participants and/or collaborators in the articles reviewed, the vast majority of the 236 articles (86%) described participants, that is, those who the research was about or from whom data was collected. In contrast to participants, partners/collaborators (n=32; 14%) were individuals or groups who participated in the design or implementation of the collaborative change effort described. In terms of the goal for collaboration and/or for doing the work, the most frequently used terminology related to some aspect of engagement and empowerment. Common descriptors for the work itself were ‘social change’ (n=74; 31%), ‘action’ (n=33; 14%), ‘collaborative or participatory research/practice’ (n=13; 6%), ‘transformation’ (n=13; 6%) and ‘community engagement’ (n=10; 4%). Of the 236 articles that mentioned a specific framework or approach, the three most common were some variation of Participatory Action Research (n=30; 50%), Action Research (n=40; 16.9%) or Community-Based Participatory Research (n=17; 7.2%). Approximately a third of the 236 articles did not mention a specific method or tool in the abstract. The most commonly cited method/tool (n=30; 12.7%) was some variation of an arts-based method, followed by interviews (n=18; 7.6%), case studies (n=16; 6.7%), or an ethnographic-related method (n=14; 5.9%). While some articles implied action or change, only 14 of the 236 articles (6%) stated a specific action or outcome. Most often, the changes described were: the creation or modification of a model, method, process, framework or protocol (n=9; 4%); quality improvement, policy change and social change (n=8; 3%); or modifications to education/training methods and materials (n=5; 2%).
The infrequent use of collaboration as a descriptor of partner engagement, coupled with few reported findings of measurable change, raises questions about the nature of CCRED. It appears that conducting CCRED is as complex an undertaking as the problems that the work is attempting to address.
  2. Despite efforts to diversify the engineering workforce, the field remains dominated by White, male engineers. Research shows that underrepresented groups, including women and minorities, are less likely to identify and engage with scientific texts and literacy practices. Often, children of minority groups and/or working-class families do not receive the same kinds of exposure to science, technology, engineering, and mathematics (STEM) knowledge and practices as those from majority groups. Consequently, these children are less likely to engage in school subjects that provide pathways to engineering careers. Therefore, to mitigate the lack of diversity in engineering, new approaches able to broadly support engineering literacy are needed. One promising approach is disciplinary literacy instruction (DLI). DLI is a method for teaching students how advanced practitioners in a given field generate, interpret, and evaluate discipline-specific texts. DLI helps teachers provide access to high-quality, discipline-specific content to all students, regardless of race, ethnicity, gender, or socio-economic status. Therefore, DLI has the potential to reduce literacy-based barriers that discourage underrepresented students from pursuing engineering careers. While models of DLI have been developed and implemented in history, science, and mathematics, little is known about DLI in engineering. The purpose of this research is to identify the authentic texts, practices, and evaluative frameworks employed by professional engineers to inform a model of DLI in engineering. While critiques of this approach may suggest that a DLI model will reflect the literacy practices of majority engineering groups (i.e., White male engineers), we argue that a DLI model can directly empower diverse K-16 students to become engineers by instructing them in the normed knowledge and practices of engineering.
This paper presents a comparative case study conducted to investigate the literacy practices of electrical and mechanical engineers. We scaffolded our research using situated learning theory and rhetorical genre studies and considered the engineering profession as a community of practice. We generated multiple types of data with four participants (i.e., two electrical and two mechanical engineers). Specifically, we generated qualitative data, including written field notes of engineer observations, interview transcripts, think-aloud protocols, and engineer logs of literacy practices. We used constant comparative analysis (CCA) coding techniques to examine how electrical and mechanical engineers read, wrote, and evaluated texts to identify the frameworks that guide their literacy practices. We then conducted within-group and cross-group CCA to compare and contrast the literacy practices specific to each sub-discipline. Findings suggest that there are two types of engineering literacy practices: those that resonate across both mechanical and electrical engineering disciplines and those that are specific to each discipline. For example, both electrical and mechanical engineers used test procedures to review and assess steps taken to evaluate electrical or mechanical system performance. In contrast, engineers from the two sub-disciplines used different forms of representation when depicting components and arrangements of engineering systems. While practices that are common across sub-disciplines will inform a model of DLI in engineering for K-12 settings, discipline-specific practices can be used to develop and/or improve undergraduate engineering curricula.
  3. Research prior to 2005 found that no single framework existed that could capture the engineering design process fully or well and benchmark each element of the process to a commonly accepted set of referenced artifacts. Compounding the construction of a stepwise, artifact-driven framework is that engineering design is typically practiced over time as a complex and iterative process. For both novice and advanced students, learning and applying the design process is often cumulative, with many informal and formal programmatic opportunities to practice essential elements. The Engineering Design Process Portfolio Scoring Rubric (EDPPSR) was designed to apply to any portfolio that is intended to document an individual- or team-driven process leading to an original attempt to design a product, process, or method that provides an optimal solution to a genuine and meaningful problem. In essence, the portfolio should be a detailed account or “biography” of a project and the thought processes that inform that project. Besides narrative and explanatory text, entries may include (but need not be limited to) drawings, schematics, photographs, notebook and journal entries, transcripts or summaries of conversations and interviews, and audio/video recordings. Such entries are likely to be necessary in order to convey accurately and completely the complex thought processes behind the planning, implementation, and self-evaluation of the project. The rubric comprises four main components, each in turn comprising three elements. Each element has its own holistic rubric. The process by which the EDPPSR was created gives evidence of the relevance and representativeness of the rubric and helps to establish validity.
The EDPPSR model as originally rendered has a strong theoretical foundation, as it was developed by reference to the literature on the steps of the design process, through focus groups, and through expert review by teachers, faculty, and researchers in performance-based portfolio rubrics and assessments. Using the unified construct validity framework, the EDPPSR's validity was further established through expert reviewers (experts in engineering design) providing evidence supporting the content relevance and representativeness of the EDPPSR in representing the basic process of engineering design. This manuscript offers empirical evidence that supports the use of the EDPPSR model to evaluate student design-based projects in a reliable and valid manner. Intra-class correlation coefficients (ICC) were calculated to determine the inter-rater reliability (IRR) of the rubric. Given the small sample size, we also examined confidence intervals (95%) to provide a range of values in which the estimate of inter-rater reliability is likely contained.
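As a minimal sketch of the reliability computation described above, the function below implements one common ICC form, ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout and Fleiss). The function name and rating data are illustrative assumptions, not drawn from the EDPPSR study, which may have used a different ICC form or software.

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is a list of subjects (rows), each scored by the same k raters.
    Illustrative sketch only; real analyses typically use a stats package.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )
```

For two raters in perfect agreement, e.g. `icc_2_1([[1, 1], [2, 2], [3, 3], [4, 4]])`, the coefficient is 1.0; rater disagreement pulls it toward 0.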
  4. Need/Motivation (e.g., goals, gaps in knowledge): The ESTEEM project implemented a STEM capacity-building effort through students' early access to sustainable and innovative STEM stepping stones called Micro-Internships (MIs). The goal is to reap key benefits of full-length internships and undergraduate research experiences in an abbreviated format, including access, success, degree completion, transfer, and recruiting and retaining more Latinx and underrepresented students into the STEM workforce. The MIs are designed to provide students at a community college and Hispanic-Serving Institution (HSI) with authentic STEM research and applied learning experiences (ALE), support for an appropriate STEM pathway/career, and the preparation and confidence to succeed in STEM and engage in summer-long REUs, with improved outcomes. The MI projects are accessible early to more students and build momentum to better overcome critical obstacles to success. The MIs are shorter, flexibly scheduled throughout the year, and easily accessible, and participation in multiple MIs is encouraged. ESTEEM also establishes a sustainable and collaborative model, working with partners from BSCS Science Education, for MI mentoring, training, compliance, and capacity building, with shared values and practices to maximize the improvement of student outcomes. New Knowledge (e.g., hypothesis, research questions): Research indicates that REU/internship experiences can be particularly powerful for students from Latinx and underrepresented groups in STEM. However, those experiences are difficult to access for many HSI community college students (85% of our students hold off-campus jobs), and lack of confidence is a barrier for a majority of our students.
The gap between those who can and those who cannot is the “internship access gap.” This project is at a central California Community College (CCC) and HSI, the only affordable post-secondary option in a region serving a population historically underrepresented in STEM, of which 75% are Hispanic and 87% have not completed college. The MI is designed to reduce inequalities inherent in the internship paradigm by providing access to professional and research skills for those underserved students. The MI has been designed to reduce barriers by offering: shorter duration (25 contact hours); flexible timing (from one week to once a week over many weeks); open access/large groups; and a proximal location (on-campus). MI mentors participate in week-long summer workshops and an ongoing monthly community of practice with the goal of co-constructing a shared vision, engaging in conversations about pedagogy and learning, and sustaining the MI program going forward. Approach (e.g., objectives/specific aims, research methodologies, and analysis): Research Question and Methodology: We want to know: How does participation in a micro-internship affect students' interest in and confidence to pursue STEM? We used a mixed-methods design triangulating quantitative Likert-style survey data with interpretive coding of open responses to reveal themes in students' motivations, attitudes toward STEM, and confidence. Participants: The study sampled students enrolled either part-time or full-time at the community college. Although each MI was classified within STEM, they were open to any interested student in any major. Demographically, participants self-identified as 70% Hispanic/Latinx, 13% Mixed-Race, and 42% female. Instrument: Student surveys were developed from two previously validated instruments that examine the impact of the MI intervention on student interest in STEM careers and in pursuing internships/REUs.
Also, the pre- and post-surveys (administered at intervals to assess longitudinal outcomes) included relevant open-response prompts. The surveys collected students' demographics; interest, confidence, and motivation in pursuing a career in STEM; perceived obstacles; and past experiences with internships and MIs. 171 students had responded to the pre-survey at the time of submission. Outcomes (e.g., preliminary findings, accomplishments to date): Because we just finished year 1, we lack at this time the longitudinal data to reveal whether student confidence is maintained over time and whether or not students are more likely to (i) enroll in more internships, (ii) transfer to a four-year university, or (iii) shorten the time it takes for degree attainment. For short-term outcomes, students significantly increased their confidence to continue pursuing opportunities to develop within the STEM pipeline, including full-length internships, completing STEM degrees, and applying for jobs in STEM. For example, using a two-tailed t-test, we compared means before and after the MI experience. Fifteen of the 16 questions that showed improvement in scores were related to student confidence to pursue STEM or perceived enjoyment of a STEM career. Findings from the free-response questions showed that the majority of students reported enrolling in the MI to gain knowledge and experience. After the MI, 66% of students reported having gained valuable knowledge and experience, and 35% of students spoke about gaining confidence and/or momentum to pursue STEM as a career.
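The pre/post comparison described above is a paired (matched-samples) two-tailed t-test on each student's before-and-after scores. The sketch below shows how the t statistic is computed from per-student score differences; the five-point Likert data are invented for illustration and are not the study's actual responses.

```python
import math

def paired_t(pre, post):
    """Paired t statistic for matched pre/post scores; df = n - 1.

    Illustrative sketch; `pre` and `post` must be aligned per student.
    """
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    # Sample variance of the differences (Bessel-corrected)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical 5-point Likert confidence scores for five students
pre = [3, 2, 4, 3, 5]
post = [4, 4, 5, 3, 5]
t, df = paired_t(pre, post)
```

The resulting t is compared against the t distribution with n − 1 degrees of freedom to obtain the two-tailed p-value; for the invented data above, t ≈ 2.14 with df = 4.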
Broader Impacts (e.g., the participation of underrepresented minorities in STEM; development of a diverse STEM workforce; enhanced infrastructure for research and education): The ESTEEM project has the potential for a transformational impact on access and success in STEM undergraduate education for underrepresented and Latinx community college students, as well as for STEM capacity building at Hartnell College, a CCC and HSI, for students, faculty, professionals, and processes that foster research in STEM and education. Through the sharing and transferability of the ESTEEM model to similar institutions, the project has the potential to change the way students are served at an early and critical stage of their higher education experience. One in every five community college students in the nation attends a CCC, over 67% of CCC students identify with ethnic backgrounds that are not White, and 40 to 50% of University of California and California State University graduates in STEM started at a CCC, making CCCs a key leverage point for recruiting and retaining a more diverse STEM workforce.
  5. Student-retention theories traditionally focus on institutional retention, even though efforts to support students in science, technology, engineering, and mathematics (STEM) occur at the college level. This study bridges this gap between research and practice by extending and empirically testing the Model of Co-Curricular Support (MCCS), which specifically focuses on supporting and retaining underrepresented groups in STEM. The MCCS is a student-retention model that demonstrates the breadth of assistance currently used to support undergraduate students in STEM, particularly those from underrepresented groups. The aim of this exploratory research is to develop and validate a survey instrument grounded in the MCCS that can be used by college administrators and student-support practitioners to assess the magnitude of institutional support received by undergraduate students in STEM. To date, such an instrument does not exist. Our poster will present a summary of the instrument development process that has occurred to date. We are developing the survey following best practices outlined in the literature: clearly defining the construct of interest and target population; reviewing related tests; developing the prototype of the survey instrument; evaluating the prototype for face and content validity with students and experts; revising and testing based on suggestions; and collecting data to determine test validity and reliability across four institutional contexts. Our institutional sample sites were purposefully selected because of their large size and diversity with respect to undergraduates in STEM. The results from our study will help prioritize the elements of institutional support that should appear somewhere in a college's suite of support efforts.
Our study will provide scientific evidence that STEM researchers, educators, administrators, and policy makers need to make informed decisions to improve STEM learning environments and design effective programs, activities, and services.