Search for: All records

Creators/Authors contains: "Jordan, Michelle"


  1. Abstract

    Background

    Small-group discussions are well established as an effective pedagogical tool to promote student learning in STEM classrooms. However, there are a variety of factors that influence how and to what extent K-12 teachers use small-group discussions in their classrooms, including both their own STEM content knowledge and their perceived ability to facilitate discussions. We designed the present study to specifically target these two factors in the context of photovoltaics, an interdisciplinary field at the intersection of all STEM disciplines with potential to yield widespread benefits related to the use of solar technologies as a sustainable, renewable energy source. Teachers engaged in a series of small-group discussions based on photovoltaic source material (e.g., scientific articles) to build both their STEM content knowledge and capability with discussions, promoting their potential to design and deliver STEM instruction in their own classrooms using small-group discussion.


    Overall, teachers productively engaged in rich STEM talk: they spent most of the discussion time asking authentic questions about photovoltaic topics aligned with a variety of science and engineering disciplinary core ideas, responding to those questions with rich, elaborative talk, and taking ownership of the discussions. Teachers also evidenced increases in their photovoltaic knowledge and in their perceived capability to facilitate discussions. Finally, most teachers’ end-of-program lesson plans included the use of small-group discussions, and a subsample of teachers who completed a follow-up interview one year after the summer program reported greater enactment of discussion in their STEM classrooms.


    Our manuscript makes an important contribution that draws from a practice-based approach to professional development in a way that not only better prepares teachers in what to teach (i.e., through enhanced PV content knowledge) but also supports their ability to implement this instruction in their classrooms more effectively (i.e., through the use of small-group discussion). As such, this manuscript illustrates an innovative pedagogical approach for potential use in supporting teacher education and informs ways to enable teachers to build enhanced curricula for their STEM students.

  2. National Science Foundation (NSF)-funded Engineering Research Centers (ERCs) are required to develop and implement education and outreach opportunities related to their core technical research topics in order to broaden participation in engineering and create partnerships between industry and academia. Additionally, ERCs must include an independent evaluation of their education and outreach programming to assess its performance and impacts. To date, each ERC’s evaluation team has designed its own instruments, tools, and protocols, resulting in idiosyncratic and redundant efforts. Nonetheless, there is much overlap among the evaluation topics, concepts, and practices, suggesting that the ERC evaluation and assessment community might benefit from a common set of instruments and protocols. ERCs’ efforts could then be better spent developing more specific, sophisticated, and time-intensive evaluation tools to deepen and enrich overall ERC evaluation efforts. Such a suite of common instruments would also allow each ERC to compare its efforts to those of other ERCs as one data point for assessing its effectiveness and informing its improvement efforts. Members of a multi-ERC collaborative team, funded by the NSF, have been leading a project to develop a suite of common instruments and protocols containing both quantitative and qualitative tools. This paper reports on the development of a set of qualitative instruments that, to date, includes: (a) interview/focus group protocols intended for various groups of ERC personnel, centered around five common topics/areas, and (b) rubrics for summer program participants’ verbal poster presentations and their written poster/slide-deck presentation artifacts. The development process is described sequentially, beginning with a review of relevant literature and existing instruments, followed by the creation of an initial set of interview questions and rubric criteria. The initial versions of the tools were then pilot-tested with multiple ERCs, and feedback sessions with the education/evaluation leaders of the piloting ERCs informed further revisions.
  3. National Science Foundation (NSF)-funded Engineering Research Centers (ERCs) must complement their technical research with education and outreach opportunities that: 1) improve and promote engineering education, both within the center and in the local community; 2) encourage underrepresented populations to participate in engineering activities; and 3) promote communication and collaboration between industry and academia. ERCs must adequately evaluate their educational and outreach programs to ensure that these goals are met, yet each ERC has complete autonomy in conducting and reporting such evaluation. The evaluation tools used by individual ERCs are quite similar, but each ERC has designed its evaluation processes in isolation, including tools such as survey instruments, interview protocols, focus group protocols, and observation protocols. These isolated efforts result in redundant expenditure of resources and a lack of outcome comparability across ERCs. Leaders from three ERCs initiated a collaborative effort to address this issue by building a suite of common evaluation instruments that all current and future ERCs can use. This leading group, which has worked together for two years, consists of the education directors and external evaluators from all three partner ERCs along with engineering education researchers. The project addresses the four ERC program clusters: Broadening Participation in Engineering, Centers and Networks, Engineering Education, and Engineering Workforce Development. The instruments developed attend to culture of inclusion, outreach activities, mentoring experience, and sustained interest in engineering. The project will deliver best practices in education program evaluation that will not only support existing ERCs but also serve as immediate tools for brand-new ERCs and similar large-scale research centers.
    Expanding the research beyond TEEC and sharing the developed instruments with NSF as well as other ERCs will promote and encourage continual cross-ERC collaboration and research. Further, the joint evaluation will increase evaluation consistency across all ERC education programs, and embedded instrumental feedback loops will lead to continual improvement in ERC education performance and support the growth of an inclusive and innovative engineering workforce. Four major deliverables are planned: first, a common quantitative assessment instrument, named the Multi-ERC Instrument Inventory (MERCII); second, a set of qualitative instruments to complement MERCII; third, a web-based evaluation platform for MERCII; and fourth, an updated NSF ERC education program evaluation best-practice manual. Together, these deliverables will become part of, and be supplemented by, an ERC evaluator toolbox. This project strives to significantly impact how ERCs evaluate their educational and outreach programs. Studies based on a single ERC lack the sample size to truly test the validity of any evaluation instruments or measures; a common suite of instruments across ERCs would provide the opportunity for a large-scale assessment study. The online platform will further provide an easy-to-use tool for all ERCs to facilitate evaluation, share data, and report impacts.
  4. The Engineering Research Centers (ERCs), funded by the National Science Foundation (NSF), play an important role in improving engineering education, bridging engineering academia and broader communities, and promoting a culture of diversity and inclusion. Each ERC must partner with an independent evaluation team to annually assess its performance and impact on advancing education, connecting communities, and building a diverse culture. This evaluation is currently performed independently and in isolation, which leads to inconsistent evaluations and redundant investment of ERC resources in tasks such as developing evaluation instruments. These isolated efforts to quantitatively evaluate education programs also typically lack adequate sample size within a single center, which limits the validity and reliability of the quantitative analyses. Three ERCs, all associated with a large southwestern university in the United States, worked collaboratively to overcome these sample-size and measurement-inconsistency concerns by developing a common quantitative instrument capable of evaluating any ERC’s education and diversity impacts. The instrument is the result of a systematic process of comparing and contrasting each ERC’s existing evaluation tools, including surveys and interview protocols. This new, streamlined tool captures participants’ overall experience as part of the ERC by measuring constructs including skill-set development, perception of diversity and inclusion, future plans after participating in the ERC, and mentorship received from the ERC. Scales and embedded items were designed broadly for possible use with both year-long participants (e.g., graduate and undergraduate students and postdoctoral scholars) and summer program participants (Research Experience for Undergraduates, Research Experience for Teachers, and Young Scholar Program).
    The instrument was distributed and tested during Summer 2019 with participants in the summer programs of all three ERCs. The forthcoming paper will present the new common cross-ERC evaluation instrument, describe the effort of collecting data across all three ERCs, present preliminary findings, and discuss collaborative processes and challenges. A preliminary implication of this work is the ability to directly compare educational programs across ERCs. The authors also believe that this tool can give new ERCs a fast start in evaluating their educational programs.
  5. This Innovative Practice Work in Progress paper presents the collaborative efforts of three NSF-funded Engineering Research Centers (ERCs) to synthesize common tools for educational program evaluation. The aim of the NSF ERCs is to achieve transformative change by integrating engineering research and education with technological innovation within areas at the frontiers of science and engineering (e.g., NSF's 10 Big Ideas). Centers across the nation study and innovate within their technical areas using similar structures and implementation strategies, including the coordination of educational endeavors. Independent partners are enlisted by these centers to evaluate education and diversity impacts annually, and each center typically performs this task in isolation from the others. The effort required to create resources for such evaluation can result in redundancy, and the small populations available within a single center preclude psychometric analysis. This paper elaborates on the ongoing collaborative research aimed at addressing these issues by creating a streamlined, customizable, and standardized set of evaluation instruments that can be applied to any ERC evaluation.