

Title: Transformative Learning Networks: Guidelines and Insights for Netweavers
NSEC commissioned researchers to prepare four case studies to identify the opportunities and challenges of a learning network approach, with the purpose of informing NSEC's design. This report outlines the findings from those case studies. The report's primary audience is designers and members of learning networks in the STEM education improvement space.
Award ID(s):
1524832
NSF-PAR ID:
10303651
Author(s) / Creator(s):
Date Published:
Journal Name:
Network of STEM Education Centers
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    ABSTRACT The SARS-CoV-2 pandemic has ushered in a renewed interest in science along with rapid changes to educational modalities. While technology provides a variety of ways to convey learning resources, the incorporation of alternate modalities can be intimidating for those designing curricula. We propose strategies that permit the rapid adaptation of curricula to achieve learning in synchronous, asynchronous, or hybrid learning environments. Case studies are a way to engage students in realistic scenarios that contextualize concepts and highlight applications in the life sciences. While case studies are commonly available and adaptable to course goals, the practical considerations of how to deliver and assess cases in online and blended environments can instill panic. Here we review existing resources and our collective experiences creating, adapting, and assessing case materials across different modalities. We discuss the benefits of using case studies and provide tips for implementation. Further, we describe functional examples of a three-step process to prepare cases with defined outcomes for individual student preparation, collaborative learning, and individual student synthesis, creating an inclusive learning experience whether in a traditional or remote learning environment.
  2. Abstract
    Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the spelling of an individual scholar's name further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies.
    Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net, i.e., a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point: if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude they are one and the same individual. We find email addresses similarly adequate: if two author names that share the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; the same affiliation may employ two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same last name and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification.
    To achieve name consolidation independent of author identifier matches, we developed a procedure used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed implements our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names that share the primary author's surname and first initial and that share commonalities in the WOS field on which the user chose to consolidate. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).
    Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.
    Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.
    Originality/value: Once again, the procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both.
    Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each case study. We find that match field effectiveness is in large part a function of field coverage. The original dataset sizes, the timeframes analyzed, and the subject areas in which the authors publish also differ across the case studies. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives.
    We attribute this to excellent match field coverage, especially in the more specific match fields, as well as to a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user friendly. It might be placed in a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which are not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to yield a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
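    The consolidation step described above is concrete enough to sketch in code. What follows is a minimal, hypothetical Python rendering of that step, not the authors' actual VantagePoint script: records are assumed to be dicts parsed from a Web of Science export, each field value is assumed to be stored as a list of strings (a one-element list for single-valued fields such as email), and the key names used here are illustrative.

        # Hypothetical sketch of the field-commonality consolidation step.
        # Key names ("author" and the chosen match_field) are illustrative.
        def consolidate(records, primary_author, match_field):
            """Merge author-name variants into the primary author's name set.

            Starting from a known true positive (primary_author), pull in any
            name variant whose records share at least one value of match_field
            with records already attributed to the name set; repeat until no
            new variant appears. Returns the records for the consolidated set.
            """
            names = {primary_author}
            changed = True
            while changed:
                changed = False
                # Match-field values seen across records already in the name set.
                known = {value
                         for record in records if record["author"] in names
                         for value in record.get(match_field, [])}
                for record in records:
                    values = set(record.get(match_field, []))
                    if record["author"] not in names and known & values:
                        names.add(record["author"])
                        changed = True
            return [record for record in records if record["author"] in names]

    In practice one would run this once per match field, starting with the most conclusive fields (author identifiers, emails) and moving to the less conclusive ones (source titles, co-authors, ISSNs), mirroring the multiple rounds of reduction the abstract describes; whatever records remain outside the consolidated set are then inspected manually.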
  3. The dearth of women and people of color in the field of computer science is a well-documented phenomenon. Following Obama's 2016 declaration of the need for a nationwide CS for All movement in the US, educators, school districts, states and the US-based National Science Foundation have responded with an explosion of activity directed at developing computer science learning opportunities in K-12 settings. A major component of this effort is the creation of equitable CS learning opportunities for underrepresented populations. As a result, there exists a strong need for educational research on the development of equity-based theory and practice in CS education. This poster session reports on a work-in-progress study that uses a case study approach to engage twenty in-service elementary school teachers in reflecting on issues of equity in CS education as part of a three-day CS professional development workshop. Our work is unfolding in the context of a four-year university/district research practice partnership in a mid-sized city in the Northeastern United States. Teachers in our project are working to co-design integrated CS curriculum units for K-5 classrooms. We developed four case studies, drawn from the first year of our project, that highlight equity challenges teachers faced in the classroom when implementing the CS lessons. The case studies follow the "Teacher Moments" template created by the Teaching Systems Lab in Open Learning at MIT. The case study activity is meant to deepen reflection and discussion on how to create equitable learning opportunities for elementary school students. We present preliminary findings. 
  4. Abstract

    Artificial grammar learning (AGL) paradigms have proven to be productive and useful to investigate how young infants break into the grammar of their native language(s). The question of when infants first show the ability to learn abstract grammatical rules has been central to theoretical debates about the innate vs. learned nature of grammar. The presence of this ability early in development, that is, before considerable experience with language, has been argued to provide evidence for a biologically endowed ability to acquire language. Artificial grammar learning tasks also allow infant populations to be readily compared with adults and non‐human animals. Artificial grammar learning paradigms with infants have been used to investigate a number of linguistic phenomena and learning tasks, from word segmentation to phonotactics and morphosyntax. In this review, we focus on AGL studies testing infants' ability to learn grammatical/structural properties of language. Specifically, we discuss the results of AGL studies focusing on repetition‐based regularities, the categorization of functors, adjacent and non‐adjacent dependencies, and word order. We discuss the implications of the results for a general theory of language acquisition, and we outline some of the open questions and challenges.

  5.
    To preserve the stories of resiliency and document the infrastructure damage caused by Hurricanes Irma and María and the 2020 earthquakes in Puerto Rico, the timely collection of evidence is essential. To address this need, case studies of the damage caused by these natural disasters are needed, along with a repository that records and centralizes relevant cases providing evidence of infrastructure damage and of processes worth preserving. To develop these case studies and the repository, a two-pronged approach was used in this study. First, the case study methodology was followed. According to Yin, a case study is "an intense study of a single unit with the purpose of understanding a larger class of (similar) units". Case studies are used in academia for both research and teaching purposes. Our research team advocates for the use of case studies as tools to inform both learning and decision-making. Second, the repository model was developed. This paper presents the results of the development of the repository and includes sample case studies. The repository allows students, academics, researchers, and other stakeholders to understand the impact of extreme environmental conditions on the built environment. Faculty can use the repository in their courses to teach Architecture, Engineering, and Construction students topics related to resiliency and sustainability in the built environment. Each case study developed and deposited in the repository answers research questions regarding what, how, and when the damage happened, who the stakeholders involved in the process were, what their actions were, and what lessons were learned. The case studies have the potential to serve as answers to hypotheses for those mining the repository. The paper contributes to the body of knowledge by presenting the results of the development of case studies and a database that can be used for both research and teaching purposes. The approach can be replicated in the US and in other countries that need to record and systematize information after natural events.
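    The questions each case study answers (what, how, and when the damage happened; who the stakeholders were; what their actions were; what was learned) suggest a natural record structure for such a repository. As a rough illustration only, here is a Python sketch; the paper publishes no schema, so the class and every field name below are assumptions:

        # Rough sketch of one repository entry; all names are illustrative,
        # not taken from the paper's actual repository model.
        from dataclasses import dataclass, field

        @dataclass
        class CaseStudy:
            event: str                # e.g., "Hurricane María (2017)"
            what_happened: str        # the infrastructure damage observed
            how_and_when: str         # mechanism and timeline of the damage
            stakeholders: list[str] = field(default_factory=list)  # who was involved
            actions: list[str] = field(default_factory=list)       # what they did
            lessons_learned: list[str] = field(default_factory=list)

    Structuring entries this way would let students, researchers, and other stakeholders mining the repository filter cases by event, stakeholder, or lesson, matching the research and teaching uses the abstract describes.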