

Title: FaceBase 3: analytical tools and FAIR resources for craniofacial and dental research
ABSTRACT The FaceBase Consortium was established by the National Institute of Dental and Craniofacial Research in 2009 as a ‘big data’ resource for the craniofacial research community. Over the past decade, researchers have deposited hundreds of annotated and curated datasets on both normal and disordered craniofacial development in FaceBase, all freely available to the research community on the FaceBase Hub website. The Hub has developed numerous visualization and analysis tools designed to promote integration of multidisciplinary data while remaining dedicated to the FAIR principles of data management (findability, accessibility, interoperability and reusability) and providing a faceted search infrastructure for locating desired data efficiently. Summaries of the datasets generated by the FaceBase projects from 2014 to 2019 are provided here. FaceBase 3 now welcomes contributions of data on craniofacial and dental development in humans, model organisms and cell lines. Collectively, the FaceBase Consortium, along with other NIH-supported data resources, provide a continuously growing, dynamic and current resource for the scientific community while improving data reproducibility and fulfilling data sharing requirements.
Award ID(s):
1711847
NSF-PAR ID:
10300497
Author(s) / Creator(s):
Date Published:
Journal Name:
Development
Volume:
147
Issue:
18
ISSN:
0950-1991
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Between 2018 and 2021, the PIs for National Science Foundation Awards #1758781 and #1758814, EAGER: Collaborative Research: Developing and Testing an Incubator for Digital Entrepreneurship in Remote Communities, in partnership with the Tanana Chiefs Conference (TCC), the traditional tribal consortium of the 42 villages of Interior Alaska, jointly developed and conducted large-scale digital and in-person surveys of multiple Alaskan Interior communities. The survey was distributed via a combination of in-person paper surveys, digital surveys, social media links, verbal in-person interviews and telephone-based responses. Analysis of this measure using SAS demonstrated a statistically significant need for enhanced digital infrastructure and reworked digital entrepreneurial and technological education in the TCC region. (1) Two statistical measures were created during this research: Entrepreneurial Readiness (ER) and Digital Technology needs and skills (DT), both of which showed high internal consistency (Cronbach's alpha of 0.89 and 0.81, respectively). (2) The measures revealed entrepreneurial readiness challenges and evidence of specific, addressable barriers that currently hinder regional digital economic activity. The survey data showed statistically significant correlation with the mixed-methodological in-person focus group and interview research conducted by the PIs and TCC collaborators in Hughes and Huslia, AK, which further corroborated the stated barriers to entrepreneurship development in the region. (3) Data generated by the survey and fieldwork is maintained by the Tanana Chiefs Conference under data sovereignty agreements. The survey and focus group data contain aggregated statistical/empirical data as well as qualitative/subjective detail that risks becoming personally identifiable, due to (but not limited to) the exceedingly small Arctic community population sizes.
This metadata is provided to serve as a record of the data collection and analysis conducted, and to share some high-level findings that, while revealing no personal information, may be helpful for policymaking, regional planning and efforts toward educational curriculum development and infrastructure investment. The sample demographics consist of 272 women, 79 men, and 4 respondents who did not indicate a gender. Barriers to entrepreneurial readiness were a component of the measure: lack of education is the top barrier, followed closely by lack of access to childcare. Among women who participated in the survey, 30% with two or more children report lack of childcare to be a significant barrier to entrepreneurial and small-business activity. For entrepreneurial readiness and the digital economy, the scales perform well from a psychometric standpoint. The summary scores are roughly normally distributed. Cronbach’s alphas are greater than 0.80 for both. They are moderately correlated with each other (r = 0.48, p < .0001). Men and women do not differ significantly on either measure. Education is significantly related to the digital economy measure. The detail provided in the survey related to educational needs enabled optimized development of the Incubator for Digital Entrepreneurship in Remote Communities. Enhanced digital entrepreneurship training with clear cultural linkages to traditions and community needs, along with additional childcare opportunities, are two of several specific recommendations provided to the TCC. The project PIs are working closely with the TCC administration and community members on elements of culturally aligned curriculum development that respect tribal data sovereignty, local data management protocols, data anonymity and adherence to human subjects (IRB) protocols.
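The internal-consistency figures reported above (Cronbach's alphas of 0.89 and 0.81) can be reproduced for any item-level response matrix with a few lines of code. The sketch below is illustrative only: the Likert-style responses are invented example data, not the survey data, which remains under TCC data sovereignty.

```python
# Illustrative sketch of Cronbach's alpha, the internal-consistency
# statistic reported for the ER and DT scales. Example data is made up.

def cronbach_alpha(items):
    """items: one list of responses per scale item, aligned by respondent."""
    k = len(items)                       # number of items in the scale
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical 4-item scale answered by 5 respondents (1-5 Likert):
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
    [4, 5, 3, 4, 2],
]
print(round(cronbach_alpha(responses), 2))  # ≈ 0.92, i.e. high consistency
```

Because the ratio of item variances to total-score variance is scale-free, using sample instead of population variance gives the same alpha.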
While the survey data is currently embargoed and cannot be released publicly for reasons of anonymity, the project PIs are working with the NSF Arctic Data Center to determine pathways for sharing personally protected data with the larger scientific community. These approaches may consist of aggregating and digitally anonymizing sensitive data in ways that cannot be de-aggregated and that meet agency and scientific-community needs, while also fully respecting and protecting participants’ rights and personal privacy. At present the data-sensitivity protocols are not yet adapted to TCC requirements, and the datasets will remain in TCC’s care.
  2. Abstract

    Background

    Cleft palate is one of the most prevalent birth defects. Mice are useful for studying palate development because of their morphological and genetic similarities to humans. In mice, palate development occurs between embryonic days (E)11.5 and 15.5. Single cell transcriptional profiles of palate cell populations have been a valuable resource for the craniofacial research community, but we lack a single cell transcriptional profile of the anterior palate at E13.5, at the transition from proliferation to shelf elevation.

    Results

    A detailed single cell RNA sequencing analysis reveals heterogeneity in the expression profiles of the cell populations of the E13.5 anterior palate. Hybridization chain reaction RNA fluorescent in situ hybridization (HCR RNA FISH) reveals that epithelial populations segregate into layers, while mesenchymal populations spatially segregate into four domains. One of these mesenchymal populations expresses ligands and receptors distinct from the rest of the mesenchyme, suggesting that these cells have a unique function. RNA velocity analysis shows that two terminal cell states, which contribute to either the proximal or distal palatal regions, emerge from a single progenitor pool.

    Conclusion

    This single-cell-resolution expression dataset and detailed analysis of the E13.5 anterior palate provide a powerful resource for mechanistic insight into secondary palate morphogenesis for the craniofacial research community.

     
  3. Abstract

    Large-scale genotype and phenotype data have been increasingly generated to identify genetic markers, understand gene function and evolution and facilitate genomic selection. These datasets hold immense value for both current and future studies, as they are vital for crop breeding, yield improvement and overall agricultural sustainability. However, integrating these datasets from heterogeneous sources presents significant challenges and hinders their effective utilization. We established the Genotype-Phenotype Working Group in November 2021 as part of the AgBioData Consortium (https://www.agbiodata.org) to review current data types and resources that support archiving, analysis and visualization of genotype and phenotype data, and to understand the needs and challenges of the plant genomic research community. For 2021–22, we identified different types of datasets and examined metadata annotations related to experimental design, methods, sample collection, etc. Furthermore, we thoroughly reviewed publicly funded repositories for raw and processed data, as well as secondary databases and knowledgebases that enable the integration of heterogeneous data in the context of genome browsers, pathway networks and tissue-specific gene expression. Based on our survey, we identify needs for (i) additional infrastructural support for archiving many new data types, (ii) development of community standards for data annotation and formatting, (iii) resources for biocuration and (iv) analysis and visualization tools that connect genotype data with phenotype data to enhance knowledge synthesis and foster translational research. Although this paper only covers the data and resources relevant to the plant research community, we expect that similar issues and needs are shared by researchers working on animals.

    Database URL: https://www.agbiodata.org.

     
  4. Obeid, Iyad ; Picone, Joseph ; Selesnick, Ivan (Ed.)
    The Neural Engineering Data Consortium (NEDC) is developing a large open source database of high-resolution digital pathology images known as the Temple University Digital Pathology Corpus (TUDP) [1]. Our long-term goal is to release one million images. We expect to release the first 100,000 image corpus by December 2020. The data is being acquired at the Department of Pathology at Temple University Hospital (TUH) using a Leica Biosystems Aperio AT2 scanner [2] and consists entirely of clinical pathology images. More information about the data and the project can be found in Shawki et al. [3]. We currently have a National Science Foundation (NSF) planning grant [4] to explore how best the community can leverage this resource. One goal of this poster presentation is to stimulate community-wide discussions about this project and determine how this valuable resource can best meet the needs of the public. The computing infrastructure required to support this database is extensive [5] and includes two HIPAA-secure computer networks, dual petabyte file servers, and Aperio’s eSlide Manager (eSM) software [6]. We currently have digitized over 50,000 slides from 2,846 patients and 2,942 clinical cases. There is an average of 12.4 slides per patient and 10.5 slides per case with one report per case. 
The data is organized by tissue type as shown below:

Filenames:
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_00123456.docx

Explanation:
- tudp: root directory of the corpus
- v1.0.0: version number of the release
- svs: the image data type
- gastro: the type of tissue
- 000001: six-digit sequence number used to control directory complexity
- 00123456: eight-digit patient MRN
- 2015_03_05: the date the specimen was captured
- 0s15_12345: the clinical case name
- 0s15_12345_0a001_00123456_lvl0001_s000.svs: the actual image filename, consisting of a repeat of the case name, a site code (e.g., 0a001), the type and depth of the cut (e.g., lvl0001) and a token number (e.g., s000)
- 0s15_12345_00123456.docx: the filename for the corresponding case report

We currently recognize fifteen tissue types in the first installment of the corpus. The raw image data is stored in Aperio’s “.svs” format, which is a multi-layered compressed JPEG format [3,7]. Pathology reports containing a summary of how a pathologist interpreted the slide are also provided in a flat text file format. A more complete summary of the demographics of this pilot corpus will be presented at the conference. Another goal of this poster presentation is to share our experiences with the larger community, since many of these details have not been adequately documented in scientific publications. There are quite a few obstacles in collecting this data that have slowed down the process and need to be discussed publicly. Our backlog of slides dates back to 1997, meaning there are many that need to be sifted through and discarded for peeling or cracking. Additionally, during scanning a slide can get stuck, stalling a scan session for hours and resulting in a significant loss of productivity.
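The path convention described above can be unpacked mechanically. The sketch below is illustrative, not an official NEDC tool: the field labels in the returned dictionary are my own shorthand, and it handles only image (.svs) paths, not the .docx case reports.

```python
# Sketch: unpacking the TUDP path convention described above.
# Field labels are informal shorthand, not official NEDC terminology.

def parse_tudp_path(path):
    """Split a TUDP .svs image path into labeled fields."""
    parts = path.split("/")
    # .../<root>/<version>/<dtype>/<tissue>/<seq>/<mrn>/<date>/<case>/<image>
    root, version, dtype, tissue, seq, mrn, date, case = parts[-9:-1]
    stem = parts[-1].rsplit(".", 1)[0]   # drop the .svs extension
    # stem repeats the case name, then site code, MRN, cut level, token
    case_a, case_b, site, mrn_repeat, level, token = stem.split("_")
    return {
        "version": version,
        "tissue": tissue,
        "sequence": seq,
        "patient_mrn": mrn,
        "scan_date": date,
        "case": case,
        "site": site,
        "cut_level": level,
        "token": token,
    }

p = parse_tudp_path(
    "tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/"
    "0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs"
)
print(p["tissue"], p["case"], p["site"])   # gastro 0s15_12345 0a001
```

A parser like this is useful for sanity-checking a corpus release: every image's embedded case name and MRN should match its enclosing directories.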
Over the past two years, we have accumulated significant experience with how to scan a diverse inventory of slides using the Aperio AT2 high-volume scanner. We have been working closely with the vendor to resolve many problems associated with the use of this scanner for research purposes. This scanning project began in January 2018, when the scanner was first installed. The scanning process was slow at first, since there was a learning curve in how the scanner worked and how to obtain samples from the hospital. From its start date until May 2019, ~20,000 slides were scanned. In the six months from May to November, we tripled that number and now hold ~60,000 slides in our database. This dramatic increase in productivity was due to additional undergraduate staff members and an emphasis on efficient workflow. The Aperio AT2 scans 400 slides a day, requiring at least eight hours of scan time. The efficiency of these scans can vary greatly. When our team first started, approximately 5% of slides failed the scanning process due to focal-point errors. We have been able to reduce that to 1% through a variety of means: (1) best practices regarding daily and monthly recalibrations, (2) tweaking software settings such as the tissue-finder parameters, and (3) experience with how to clean and prep slides so they scan properly. Nevertheless, this is not a completely automated process, making it very difficult to reach our production targets. With a staff of three undergraduate workers spending a total of 30 hours per week, we find it difficult to scan more than 2,000 slides per week using a single scanner (400 slides per night x 5 nights per week). The main limitation in achieving this level of production is the lack of a completely automated scanning process; it takes a couple of hours to sort, clean and load slides. We have streamlined all other aspects of the workflow required to database the scanned slides so that there are no additional bottlenecks.
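The throughput figures above (one scanner, 400 slides per night, five nights per week, with the focal-point failure rate falling from roughly 5% to 1%) can be checked with back-of-the-envelope arithmetic. All numbers are taken from the text; the calculation itself is illustrative only.

```python
# Illustrative arithmetic for the scanner-throughput figures quoted above.
slides_per_night = 400
nights_per_week = 5
weekly_capacity = slides_per_night * nights_per_week
print(weekly_capacity)  # 2000 slides/week ceiling for a single scanner

# Usable scans per week at the early (~5%) vs. current (~1%) failure rates:
for failure_rate in (0.05, 0.01):
    usable = weekly_capacity * (1 - failure_rate)
    print(f"{failure_rate:.0%} failures -> {usable:.0f} usable scans/week")
```

The four-point drop in failure rate is worth roughly 80 extra usable scans per week at full capacity, which is why the recalibration and slide-prep practices matter at this scale.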
To bridge the gap between hospital operations and research, we are using Aperio’s eSM software. Our goal is to provide pathologists access to high-quality digital images of their patients’ slides. eSM is a secure website that holds the images with their metadata labels, the patient report, and the path to where the image is located on our file server. Although eSM includes significant infrastructure to import slides into the database using barcodes, TUH does not currently support barcode use. Therefore, we manage the data using a mixture of Python scripts and manual import functions available in eSM. The database and associated tools are based on proprietary formats developed by Aperio, making this another important point of community-wide discussion on how best to disseminate such information. Our near-term goal for the TUDP Corpus is to release 100,000 slides by December 2020. We hope to continue data collection over the next decade until we reach one million slides. We are creating two pilot corpora using the first 50,000 slides we have collected. The first corpus consists of 500 slides with a marker stain and another 500 without it. This set was designed to let people debug their basic deep learning processing flow on these high-resolution images. We discuss our preliminary experiments on this corpus and the challenges in processing these high-resolution images using deep learning in [3]. We are able to achieve a mean sensitivity of 99.0% for slides with pen marks, and 98.9% for slides without marks, using a multistage deep learning algorithm. While this dataset was very useful in initial debugging, we are in the midst of creating a new, more challenging pilot corpus using actual tissue samples annotated by experts. The task will be to detect ductal carcinoma in situ (DCIS) or invasive breast cancer tissue. There will be approximately 1,000 images per class in this corpus.
Based on the number of features annotated, we can train on a two-class problem of DCIS versus benign, or increase the difficulty by expanding the classes to include DCIS, benign, stroma, pink tissue, non-neoplastic, etc. Those interested in the corpus or in participating in community-wide discussions should join our listserv, nedc_tuh_dpath@googlegroups.com, to be kept informed of the latest developments in this project. You can learn more from our project website: https://www.isip.piconepress.com/projects/nsf_dpath.
  5. Abstract

    Objectives

    Ancient human dental calculus is a unique, nonrenewable biological resource encapsulating key information about the diets, lifestyles, and health conditions of past individuals and populations. With compounding calls for its destructive analysis, it is imperative to refine the ways in which the scientific community documents, samples, and analyzes dental calculus so as to maximize its utility to the public and the scientific community.

    Materials and Methods

    Our research team conducted an IRB‐approved survey of dental calculus researchers with diverse academic backgrounds, research foci, and analytical specializations.

    Results

    This survey reveals variation in how metadata is collected and utilized across different subdisciplines and highlights how these differences have profound implications for dental calculus research. Moreover, the survey suggests the need for more communication between those who excavate, curate, and analyze biomolecular data from dental calculus.

    Discussion

    Challenges in cross‐disciplinary communication limit researchers' ability to effectively utilize samples in rigorous and reproducible ways. Specifically, the lack of standardized skeletal and dental metadata recording and contamination avoidance procedures hinder downstream anthropological applications, as well as the pursuit of broader paleodemographic and paleoepidemiological inquiries that rely on more complete information about the individuals sampled. To provide a path forward toward more ethical and standardized dental calculus sampling and documentation approaches, we review the current methods by which skeletal and dental metadata are recorded. We also describe trends in sampling and contamination‐control approaches. Finally, we use that information to suggest new guidelines for ancient dental calculus documentation and sampling strategies that will improve research practices in the future.

     