

Title: Flat teams drive scientific innovation
With teams growing in all areas of scientific and scholarly research, we explore the relationship between team structure and the character of knowledge they produce. Drawing on 89,575 self-reports of team member research activity underlying scientific publications, we show how individual activities cohere into broad roles of 1) leadership through the direction and presentation of research and 2) support through data collection, analysis, and discussion. The hidden hierarchy of a scientific team is characterized by its lead (or L) ratio of members playing leadership roles to total team size. The L ratio is validated through correlation with imputed contributions to the specific paper and to science as a whole, which we use to effectively extrapolate the L ratio for 16,397,750 papers where roles are not explicit. We find that, relative to flat, egalitarian teams, tall, hierarchical teams produce less novelty and more often develop existing ideas, increase productivity for those on top and decrease it for those beneath, and increase short-term citations but decrease long-term influence. These effects hold within person—the same person on the same-sized team produces science much more likely to disruptively innovate if they work on a flat, high-L-ratio team. These results suggest the critical role flat teams play for sustainable scientific advance and the training and advancement of scientists.
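The L ratio defined in the abstract is a simple fraction: the number of team members playing leadership roles (direction and presentation of research) divided by total team size. A minimal sketch, assuming an illustrative role coding that is not the paper's own:

```python
# Hypothetical role labels for illustration only; the paper's actual activity
# taxonomy and role-clustering procedure are not reproduced here.
LEADERSHIP_ROLES = {"direction", "presentation"}

def l_ratio(member_roles):
    """member_roles: one set of activity labels per team member.
    Returns the fraction of members with at least one leadership activity."""
    leads = sum(1 for roles in member_roles if roles & LEADERSHIP_ROLES)
    return leads / len(member_roles)

# A four-person team with two members in leadership roles:
team = [{"direction"}, {"data_collection"}, {"analysis", "presentation"}, {"discussion"}]
print(l_ratio(team))  # 2 of 4 members lead -> 0.5
```

A flat team (everyone leading) has L ratio 1.0; a tall hierarchy with a single lead atop a large team approaches 1/n.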
Award ID(s):
1800956
NSF-PAR ID:
10381172
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
119
Issue:
23
ISSN:
0027-8424
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Obeid, Iyad ; Picone, Joseph ; Selesnick, Ivan (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing a large open source database of high-resolution digital pathology images known as the Temple University Digital Pathology Corpus (TUDP) [1]. Our long-term goal is to release one million images. We expect to release the first 100,000 image corpus by December 2020. The data is being acquired at the Department of Pathology at Temple University Hospital (TUH) using a Leica Biosystems Aperio AT2 scanner [2] and consists entirely of clinical pathology images. More information about the data and the project can be found in Shawki et al. [3]. We currently have a National Science Foundation (NSF) planning grant [4] to explore how best the community can leverage this resource. One goal of this poster presentation is to stimulate community-wide discussions about this project and determine how this valuable resource can best meet the needs of the public. The computing infrastructure required to support this database is extensive [5] and includes two HIPAA-secure computer networks, dual petabyte file servers, and Aperio’s eSlide Manager (eSM) software [6]. We currently have digitized over 50,000 slides from 2,846 patients and 2,942 clinical cases. There is an average of 12.4 slides per patient and 10.5 slides per case with one report per case. 
The data is organized by tissue type as shown below:

Filenames:
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_00123456.docx

Explanation:
• tudp: root directory of the corpus
• v1.0.0: version number of the release
• svs: the image data type
• gastro: the type of tissue
• 000001: six-digit sequence number used to control directory complexity
• 00123456: 8-digit patient MRN
• 2015_03_05: the date the specimen was captured
• 0s15_12345: the clinical case name
• 0s15_12345_0a001_00123456_lvl0001_s000.svs: the image filename, consisting of a repeat of the case name, a site code (e.g., 0a001), the type and depth of the cut (e.g., lvl0001), and a token number (e.g., s000)
• 0s15_12345_00123456.docx: the filename for the corresponding case report

We currently recognize fifteen tissue types in the first installment of the corpus. The raw image data is stored in Aperio’s “.svs” format, which is a multi-layered compressed JPEG format [3,7]. Pathology reports containing a summary of how a pathologist interpreted the slide are also provided in a flat text file format. A more complete summary of the demographics of this pilot corpus will be presented at the conference. Another goal of this poster presentation is to share our experiences with the larger community, since many of these details have not been adequately documented in scientific publications. There are quite a few obstacles in collecting this data that have slowed down the process and need to be discussed publicly. Our backlog of slides dates back to 1997, meaning many must be sifted through and discarded due to peeling or cracking. Additionally, during scanning a slide can get stuck, stalling a scan session for hours and resulting in a significant loss of productivity.
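The image filename convention above is regular enough to parse mechanically. A minimal sketch, assuming the stated field layout (case name, site code, patient MRN, cut level, token); the exact field widths are inferred from the single example and may not hold for every file:

```python
import re

# Fields follow the layout described in the corpus documentation above;
# widths (e.g., 8-digit MRN, 4-digit level) are assumptions from the example.
PATTERN = re.compile(
    r"(?P<case>\w+_\d+)_(?P<site>0a\d{3})_(?P<mrn>\d{8})"
    r"_lvl(?P<level>\d{4})_s(?P<token>\d{3})\.svs"
)

def parse_svs_name(name):
    """Return the filename's fields as a dict, or None if it doesn't match."""
    m = PATTERN.fullmatch(name)
    return m.groupdict() if m else None

info = parse_svs_name("0s15_12345_0a001_00123456_lvl0001_s000.svs")
# -> case '0s15_12345', site '0a001', mrn '00123456', level '0001', token '000'
```

Such a parser makes it easy to group slides by case or patient when iterating over the corpus directory tree.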
Over the past two years, we have accumulated significant experience with how to scan a diverse inventory of slides using the Aperio AT2 high-volume scanner. We have been working closely with the vendor to resolve many problems associated with the use of this scanner for research purposes. This scanning project began in January of 2018 when the scanner was first installed. The scanning process was slow at first, since there was a learning curve with how the scanner worked and how to obtain samples from the hospital. From its start date until May of 2019, ~20,000 slides were scanned. In the past six months, from May to November, we have tripled that number and now hold ~60,000 slides in our database. This dramatic increase in productivity was due to additional undergraduate staff members and an emphasis on efficient workflow. The Aperio AT2 scans 400 slides a day, requiring at least eight hours of scan time. The efficiency of these scans can vary greatly. When our team first started, approximately 5% of slides failed the scanning process due to focal point errors. We have been able to reduce that to 1% through a variety of means: (1) best practices regarding daily and monthly recalibrations, (2) tweaking software settings such as the tissue finder parameters, and (3) experience with how to clean and prep slides so they scan properly. Nevertheless, this is not a completely automated process, making it very difficult to reach our production targets. With a staff of three undergraduate workers spending a total of 30 hours per week, we find it difficult to scan more than 2,000 slides per week using a single scanner (400 slides per night x 5 nights per week). The main limitation in achieving this level of production is the lack of a completely automated scanning process; it takes a couple of hours to sort, clean, and load slides. We have streamlined all other aspects of the workflow required to database the scanned slides so that there are no additional bottlenecks.
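The throughput ceiling quoted above follows from straightforward arithmetic; an illustrative check of the stated figures (400 slides per overnight session, five sessions per week):

```python
# Illustrative arithmetic only, using the throughput figures stated in the text.
slides_per_session = 400   # one overnight scan session on the Aperio AT2
sessions_per_week = 5      # five nights per week
weekly_capacity = slides_per_session * sessions_per_week
print(weekly_capacity)     # 2000 slides/week, matching the stated ceiling

# At that rate, the one-million-slide goal on a single scanner would take:
weeks_to_million = 1_000_000 / weekly_capacity
print(round(weeks_to_million / 52, 1))  # roughly 9.6 years
```

This is why the project's decade-long timeline depends on sustained scanner uptime, and why the hours lost to sorting, cleaning, and loading slides matter so much.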
To bridge the gap between hospital operations and research, we are using Aperio’s eSM software. Our goal is to provide pathologists access to high quality digital images of their patients’ slides. eSM is a secure website that holds the images with their metadata labels, patient report, and path to where the image is located on our file server. Although eSM includes significant infrastructure to import slides into the database using barcodes, TUH does not currently support barcode use. Therefore, we manage the data using a mixture of Python scripts and manual import functions available in eSM. The database and associated tools are based on proprietary formats developed by Aperio, making this another important point of community-wide discussion on how best to disseminate such information. Our near-term goal for the TUDP Corpus is to release 100,000 slides by December 2020. We hope to continue data collection over the next decade until we reach one million slides. We are creating two pilot corpora using the first 50,000 slides we have collected. The first corpus consists of 500 slides with a marker stain and another 500 without it. This set was designed to let people debug their basic deep learning processing flow on these high-resolution images. We discuss our preliminary experiments on this corpus and the challenges in processing these high-resolution images using deep learning in [3]. We are able to achieve a mean sensitivity of 99.0% for slides with pen marks, and 98.9% for slides without marks, using a multistage deep learning algorithm. While this dataset was very useful in initial debugging, we are in the midst of creating a new, more challenging pilot corpus using actual tissue samples annotated by experts. The task will be to detect ductal carcinoma in situ (DCIS) or invasive breast cancer tissue. There will be approximately 1,000 images per class in this corpus.
Based on the number of features annotated, we can train on a two-class problem of DCIS or benign, or increase the difficulty by expanding the classes to include DCIS, benign, stroma, pink tissue, non-neoplastic, etc. Those interested in the corpus or in participating in community-wide discussions should join our listserv, nedc_tuh_dpath@googlegroups.com, to be kept informed of the latest developments in this project. You can learn more from our project website: https://www.isip.piconepress.com/projects/nsf_dpath. 
  2. An authentic, interdisciplinary, research- and problem-based integrated science, technology, engineering, and mathematics (STEM) project may be ideal for encouraging scientific inquiry and developing teamwork among undergraduate students, but it also presents challenges. The authors describe how two interdisciplinary teams (n=6) of undergraduate college students built integrated STEM projects in a research-based internship setting, and then collaboratively brought the project to fruition, including designing lessons and activities shared with K-12 students in a classroom setting. Each three-person undergraduate team consisted of two STEM majors and one Education major. The Education majors are a special focus for this study. Interviews, field observations, and lesson plan artifacts collected from the undergraduate college students were analyzed according to authenticity factors, the authentic scientific inquiry instrument, and an integrated STEM instrument. The authors highlight areas of strength and weakness for both teams and explore how preservice teachers contributed to integrated STEM products and lessons. Teacher educators might apply recommendations for teacher preparation and professional development when facilitating authentic scientific inquiry and integrated STEM topics with both STEM and non-STEM educators. Undergraduate college students were challenged to fully integrate the STEM disciplines, the transitions between them, and the spaces between them where multiple disciplines existed. By describing the challenges of integrating the spaces between STEM disciplines, the authors offer a description of the undergraduate college students’ experiences in an effort to expand the common message beyond a flat approach of “try this activity because it works” to a more robust message of “try this type of engagement and purposefully organize for maximum results.” 
  3. Our NSF-funded project—Creating National Leadership Cohorts to Make Academic Change Happen (NSF 1649318)—represents a strategic partnership between researchers and practitioners in the domain of academic change. The principal investigators from the Making Academic Change Happen team from Rose-Hulman Institute of Technology provide familiarity with the literature of practical organizational change and package this into action-oriented workshops and ongoing support for teams funded through the REvolutionizing engineering and computer science Departments (RED) program. The PIs from the Center for Evaluation & Research for STEM Equity at the University of Washington provide expertise in social science research in order to investigate how the RED teams’ change projects unfold and how the teams develop as members of national leadership cohorts for change in engineering and computer science education. Our poster for ASEE 2018 will focus on what we have learned thus far regarding the dynamics of the researcher/practitioner partnership through the RED Participatory Action Research (REDPAR) Project. According to Worrall (2007), good partnerships are “founded on trust, respect, mutual benefit, good communities, and governance structures that allow democratic decision-making, process improvement, and resource sharing.” We have seen these elements emerge through the work of the partnership to create mutual benefits. For example, the researchers have been given an “insider’s” perspective on the practitioners’ approach—their goals, motivations for certain activities, and background information and research. The practitioners’ perspective is useful for the researchers to learn, since the practitioners’ familiarity with the organizational change literature has influenced the researchers’ questions and theoretical models. 
The practitioners’ work with the RED teams has provided insights on the teams, how they are operating, the challenges they face, and aspects of the teams’ work that may not be readily available to the researchers. As a result, the researchers have had increased access to the teams to collect data. The researchers, in turn, have been able to consider how to make their analyses useful and actionable for change-makers, the population that the practitioners are more familiar with. Insights from the researchers provide both immediate and long-term benefits to programming and increased professional impact. The researchers are trained observers, each of whom brings a unique disciplinary perspective to their observations. The richness, depth, and clarity of their observations add immeasurably to the quality of practitioners’ interactions with the RED teams. The practitioners, for example, have revised workshop content in response to the researchers’ observations, thus ensuring that the workshop content serves the needs of the RED teams. The practitioners also benefit from the joint effort on dissemination, since they can contribute to a variety of dissemination efforts (journal papers, conference presentations, workshops). We plan to share specific examples of the strategic partnership during the poster session. In doing so, we hope to encourage researchers to seek out partnerships with practitioners in order to bridge the gap between theory and practice in engineering and computer science education. 
  4. There have been many initiatives to improve the experiences of marginalized engineering students in order to increase their desire to pursue the field of engineering. However, despite these efforts, workforce numbers indicate lingering disparities. Representation in the science and engineering workforce is low, with women comprising only 16% of those in science and engineering occupations in 2019, and underrepresented minorities (e.g., Black, Hispanic, and American Indian/Alaskan Native) collectively representing only approximately 20% (National Center for Science and Engineering Statistics [NCSES], 2022). Additionally, engineering has historically held cultural values that can exclude marginalized populations. Cech (2013) argues that engineering has supported a meritocratic ideology in which intelligence is something that you are born with rather than something you can gain. Engineering, she argues, is riddled with meritocratic regimens that include such common practices as grading on a curve and “weeding” out students in courses. Farrell et al. (2021) discuss how engineering culture is characterized by elitism through practices of epistemological dominance (devaluing other ways of knowing), majorism (placing higher value on STEM over the liberal arts), and technical social dualism (the belief that issues of diversity, equity, and inclusion should not be part of engineering). These ideologies can substantially affect the persistence of both women and people of color, populations historically excluded in engineering, because their concerns and/or cultural backgrounds are not validated by instructors or other peers, which reproduces inequality. Improving student-faculty interactions through engineering professional development is one way to counteract these harmful cultural ideologies and to positively impact and increase the participation of marginalized engineering students. 
STEM reform initiatives focused on faculty professional development, such as the NSF INCLUDES Aspire Alliance (Aspire), seek to prepare and educate faculty to integrate inclusive practices across their various campus roles and responsibilities as they relate to teaching, advising, research mentoring, collegiality, and leadership. The Aspire Summer Institute (ASI) has been one of Aspire’s most successful programs. The ASI is an intensive, week-long professional development event focused on educating institutional teams on the Inclusive Professional Framework (IPF) and how to integrate its components, individually and as teams, to improve STEM faculty inclusive behaviors. The IPF includes the domains of identity, intercultural awareness, and relational skill-building (Gillian-Daniel et al., 2021). Identity involves understanding not only your personal cultural identity but that of students, and the impact of identity in learning spaces. Intercultural awareness involves instructors being able to navigate cultural interactions in a positive way as they consider the diverse backgrounds of students, while recognizing their own privileges and biases. Relational skill-building involves creating trusting relationships and a positive communication flow between instructors and students. The ASI and IPF can be used to advance a more inclusive environment for marginalized students in engineering. In this paper, we discuss the success of the ASI and how the institute and the IPF could be adapted specifically to support engineering faculty in their teaching, mentoring, and advising. 
  5. null (Ed.)
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia. Introduction: This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data. 
Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating. The challenge: enabling wide participation. With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist).
2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently).
4. No participant should need to provide their own computing resources.
In addition to the above, our platform should further
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards. 
The platform. The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which will be described in more detail in further sections. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources. 
The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team. Figure 1: High-level architecture of the challenge platform. Measuring success. The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for six months. Twenty-five teams, comprising 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best performing solutions reached seizure detection performance that would reduce hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. 
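The starter-kit flow described in components (3)-(5) can be sketched as a simple pipeline: participants supply pre-processing and model code in notebook cells, and the kit bundles it, ships it to shared compute, and returns scored results. This is a minimal sketch under stated assumptions — all helper names (bundle_code, submit_to_cluster, fetch_results) are hypothetical placeholders, not the actual Watson Studio or WML APIs:

```python
def preprocess(raw_eeg):
    """Participant-supplied: turn raw recordings into model-ready features."""
    return raw_eeg  # placeholder

def build_model():
    """Participant-supplied: a TensorFlow or PyTorch seizure-detection model."""
    return "model"  # placeholder

def run_challenge_pipeline(bundle_code, submit_to_cluster, fetch_results):
    """Hypothetical sketch of what the starter kit automates for each team."""
    bundle = bundle_code([preprocess, build_model])  # kit packages user code
    job = submit_to_cluster(bundle)                  # runs on shared GPUs (WML)
    return fetch_results(job)                        # scores fed back to leaderboard

# Stub implementations stand in for the platform services here:
result = run_challenge_pipeline(
    bundle_code=lambda fns: fns,
    submit_to_cluster=lambda b: {"bundle": b, "status": "done"},
    fetch_results=lambda job: "scored",
)
print(result)  # "scored"
```

The design point the text emphasizes is that participants only ever touch the two supplied functions; everything below that line is invisible platform plumbing.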
Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists. Figure 2: Primary occupation as reported by challenge participants. Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being Software Engineers, and 2 had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience. Conclusion: Given the growing complexity of data science problems and increasing dataset sizes, it is imperative to enable collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills including sales and design, participated in this highly technical challenge. 