Big Data in Conservation Genomics: Boosting Skills, Hedging Bets, and Staying Current in the Field
Abstract
A current challenge in the fields of evolutionary, ecological, and conservation genomics is balancing the production of large-scale datasets with the additional training often required to handle such datasets. Thus, there is an increasing need for conservation geneticists to continually learn and train to stay up-to-date through avenues such as symposia, meetings, and workshops. The ConGen meeting is a near-annual workshop that strives to guide participants in understanding population genetics principles, study design, data processing, analysis, interpretation, and applications to real-world conservation issues. Each year, ConGen gathers a diverse set of instructors and students and offers lectures, hands-on sessions, and discussions. Here, we summarize key lessons learned from the 2019 meeting and more recent updates to the field, with a focus on big data in conservation genomics. First, we highlight classical and contemporary issues in study design that are especially relevant to working with big datasets, including the intricacies of data filtering. We next emphasize the importance of building analytical skills and simulating data, and how these skills have applications within and outside of conservation genetics careers. We also highlight recent technological advances and novel applications to conservation of wild populations. Finally, we provide data and recommendations to support ongoing efforts …
- Editors: Koepfli, Klaus-Peter
- Award ID(s): 1639014
- Publication Date:
- NSF-PAR ID: 10301257
- Journal Name: Journal of Heredity
- Volume: 112
- Issue: 4
- Page Range or eLocation-ID: 313 to 327
- ISSN: 0022-1503
- Sponsoring Org: National Science Foundation
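
The abstract above singles out the intricacies of data filtering and the value of simulating data when working with big genomic datasets. As a rough, self-contained illustration (not taken from the paper or the ConGen curriculum), the Python sketch below simulates a small SNP genotype matrix and applies two hard filters common to conservation-genomics pipelines: per-site missingness and minor allele frequency (MAF). The `site_filters` helper, the 0/1/2 genotype coding with NaN for missing calls, and all thresholds are illustrative assumptions.

```python
# Minimal, illustrative sketch (not from the paper): simulate a small SNP
# genotype matrix, then apply two common hard filters -- per-site
# missingness and minor allele frequency (MAF). Thresholds are arbitrary
# examples, not recommendations.
import numpy as np

rng = np.random.default_rng(seed=42)
n_individuals, n_sites = 30, 1000

# Simulate per-site allele frequencies, then diploid genotypes coded as
# 0/1/2 copies of the alternate allele; NaN marks missing calls.
freqs = rng.uniform(0.0, 0.5, size=n_sites)
genotypes = rng.binomial(2, freqs, size=(n_individuals, n_sites)).astype(float)
missing_mask = rng.random((n_individuals, n_sites)) < 0.05  # ~5% missing data
genotypes[missing_mask] = np.nan


def site_filters(geno, max_missing=0.1, min_maf=0.05):
    """Return a boolean mask of sites passing missingness and MAF filters."""
    missing_rate = np.isnan(geno).mean(axis=0)
    # Alternate-allele frequency estimated from non-missing genotypes per site
    alt_freq = np.nanmean(geno, axis=0) / 2.0
    maf = np.minimum(alt_freq, 1.0 - alt_freq)
    return (missing_rate <= max_missing) & (maf >= min_maf)


keep = site_filters(genotypes)
print(f"Retained {keep.sum()} of {n_sites} simulated sites "
      f"after missingness and MAF filtering.")
```

In real workflows these filters are typically applied to VCF files with standard tools such as VCFtools or bcftools; the point of the sketch is only that threshold choices (for example, how much missing data to tolerate) are analysis decisions that can shape downstream inference, which is part of what the abstract means by the "intricacies of data filtering."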