

Title: PyReconstruct: A fully open-source, collaborative successor to Reconstruct
Abstract: As the serial section community transitions to volume electron microscopy, tools are needed to balance rapid segmentation efforts with documenting the fine detail of structures that support cell function. New annotation applications should be accessible to users and meet the needs of the neuroscience and connectomics communities while also being useful across other disciplines. Issues not currently addressed by a single, modern annotation application include: 1) built-in curation systems with utilities for expert intervention to provide quality assurance; 2) integrated alignment features that allow image registration on the fly as image flaws are discovered during annotation; 3) simplicity for non-specialists within and beyond the neuroscience community; 4) a system that stores experimental metadata alongside annotation data while keeping researchers masked to condition, avoiding potential biases; 5) local management of large datasets; 6) a fully open-source codebase allowing development of new tools; and more. Here, we present PyReconstruct, a modern successor to the Reconstruct annotation tool. PyReconstruct operates in a field-agnostic manner, runs on all major operating systems, breaks through legacy RAM limitations, features an intuitive and collaborative curation system, and employs a flexible and dynamic approach to image registration. It can be used to analyze, display, and publish experimental or connectomics data. PyReconstruct is well suited for generating ground truth for automated segmentation, the outcomes of which can be returned to PyReconstruct for proofreading and quality control.
Significance statement: In neuroscience, the emerging field of connectomics has produced annotation tools for reconstruction that prioritize circuit connectivity across microns to centimeters and beyond. Determining the strength of the synapses forming these connections is crucial to understanding function and requires quantification of their nanoscale dimensions and subcellular composition. PyReconstruct, successor to the early annotation tool Reconstruct, meets these requirements for synapses and other structures well beyond neuroscience. PyReconstruct lifts many restrictions of legacy Reconstruct and offers a user-friendly interface, integrated curation, dynamic alignment, nanoscale quantification, 3D visualization, and more. Extensive compatibility with third-party software provides access to the expanding tools of the connectomics and imaging communities.
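The nanoscale quantification highlighted above reduces, at its core, to geometry on traced contours. A minimal sketch, assuming annotations are closed traces stored as pixel-coordinate point lists with a known pixel size (function names and parameters are illustrative, not PyReconstruct's actual API):

```python
# Illustrative sketch: area and perimeter of a closed trace at nanoscale.
# Assumes a trace is a list of (x, y) vertices in pixels; nm_per_px is
# the calibrated pixel size. Not the PyReconstruct API.
import math

def contour_area_nm2(points, nm_per_px):
    """Shoelace formula over the closed polygon, scaled to nm^2."""
    s = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the trace
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0 * nm_per_px ** 2

def contour_perimeter_nm(points, nm_per_px):
    """Sum of edge lengths around the closed polygon, scaled to nm."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += math.hypot(x2 - x1, y2 - y1)
    return total * nm_per_px

# A 10 x 10 px square trace at 2 nm/px:
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_area_nm2(square, 2.0))      # 100 px^2 * 4 nm^2/px^2 = 400.0
print(contour_perimeter_nm(square, 2.0))  # 40 px * 2 nm/px = 80.0
```

Per-section measurements like these can then be summed across serial sections to estimate volumes and surface areas of reconstructed objects.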
Award ID(s):
2014862
PAR ID:
10623763
Publisher / Repository:
bioRxiv
Format(s):
Medium: X
Institution:
bioRxiv
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Advances in electron microscopy, image segmentation, and computational infrastructure have given rise to large-scale and richly annotated connectomic datasets, which are increasingly shared across communities. To enable collaboration, users need to be able to concurrently create new annotations and correct errors in the automated segmentation by proofreading. In large datasets, every proofreading edit relabels the cell identities of millions of voxels and thousands of annotations such as synapses. For analysis, users require immediate and reproducible access to this constantly changing and expanding data landscape. Here, we present the Connectome Annotation Versioning Engine (CAVE), a computational infrastructure for immediate and reproducible connectome analysis in up to petascale datasets (~1 mm³) while proofreading and annotation are ongoing. For segmentation, CAVE provides a distributed proofreading infrastructure for continuous versioning of large reconstructions. Annotations in CAVE are defined by locations such that they can be quickly assigned to the underlying segment, enabling fast analysis queries of CAVE's data at arbitrary time points. CAVE supports schematized, extensible annotations, so that researchers can readily design novel annotation types. CAVE is already used for many connectomics datasets, including the largest datasets available to date.
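The core idea of versioned annotations, as described above, is that every annotation keeps a time-ordered history of segment assignments so that a query at any past timestamp reproduces the segmentation state at that moment. A toy sketch of that lookup (this is an illustration of the concept, not the CAVE API; class and method names are invented):

```python
# Illustrative sketch (not the CAVE API): a versioned annotation-to-segment
# lookup. Each proofreading edit appends a timestamped assignment; queries
# at arbitrary time points return the segment in effect at that time.
import bisect

class VersionedAnnotation:
    def __init__(self):
        self._times = []     # sorted edit timestamps
        self._segments = []  # segment id in effect from that timestamp on

    def assign(self, t, segment_id):
        i = bisect.bisect(self._times, t)
        self._times.insert(i, t)
        self._segments.insert(i, segment_id)

    def segment_at(self, t):
        """Segment id this annotation mapped to at time t (None if before any edit)."""
        i = bisect.bisect_right(self._times, t)
        return self._segments[i - 1] if i else None

syn = VersionedAnnotation()
syn.assign(t=100, segment_id=7)   # initial automated segmentation
syn.assign(t=250, segment_id=42)  # proofreading edit relabels the cell
print(syn.segment_at(200))  # 7  (reproducible state before the edit)
print(syn.segment_at(300))  # 42 (state after the edit)
```

Scaling this idea to thousands of annotations per edit and petascale volumes is what the distributed infrastructure described in the abstract provides.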
  2. The size of image volumes in connectomics studies now reaches terabyte and often petabyte scales, with great diversity of appearance due to different sample preparation procedures. However, manual annotation of neuronal structures (e.g., synapses) in these huge image volumes is time-consuming, leading to labeled training data often smaller than 0.001% of the large-scale image volumes in application. Methods that can utilize in-domain labeled data and generalize to out-of-domain unlabeled data are urgently needed. Although many domain adaptation approaches have been proposed to address such issues in the natural image domain, few have been evaluated on connectomics data due to a lack of domain adaptation benchmarks. Therefore, to enable development of domain-adaptive synapse detection methods for large-scale connectomics applications, we annotated 14 image volumes from a biologically diverse set of Megaphragma viggianii brain regions originating from three different whole-brain datasets and organized the WASPSYN challenge at ISBI 2023. The annotations include coordinates of pre-synapses and post-synapses in 3D space, together with their one-to-many connectivity information. This paper describes the dataset, the tasks, the proposed baseline, the evaluation method, and the results of the challenge. Limitations of the challenge and the impact on neuroscience research are also discussed. The challenge is and will continue to be available at https://codalab.lisn.upsaclay.fr/competitions/9169. Successful algorithms that emerge from our challenge may potentially revolutionize real-world connectomics research and further the cause of unraveling the complexity of brain structure and function.
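Point-based synapse detections like those described above are commonly scored by matching predicted coordinates to ground-truth coordinates within a distance tolerance. A hedged sketch of that evaluation scheme (the challenge's exact matching criterion and tolerance may differ; this greedy matcher is illustrative):

```python
# Illustrative sketch of distance-threshold matching for point-based
# synapse detection. A prediction counts as a true positive if it lies
# within `tol` of a not-yet-matched ground-truth coordinate.
import math

def match_synapses(pred, gt, tol):
    """Greedy nearest-neighbor matching; returns (precision, recall)."""
    unmatched = list(gt)
    tp = 0
    for p in pred:
        best, best_d = None, tol
        for g in unmatched:
            d = math.dist(p, g)  # Euclidean distance in 3D
            if d <= best_d:
                best, best_d = g, d
        if best is not None:
            unmatched.remove(best)  # each ground-truth point matches once
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return precision, recall

gt = [(0, 0, 0), (10, 10, 10)]
pred = [(1, 0, 0), (50, 50, 50)]  # one hit, one spurious detection
print(match_synapses(pred, gt, tol=5.0))  # (0.5, 0.5)
```

The one-to-many pre/post connectivity in the challenge annotations would require an additional matching step over predicted partner assignments, omitted here.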
  3. Abstract: Advances in electron microscopy, image segmentation and computational infrastructure have given rise to large-scale and richly annotated connectomic datasets, which are increasingly shared across communities. To enable collaboration, users need to be able to concurrently create annotations and correct errors in the automated segmentation by proofreading. In large datasets, every proofreading edit relabels the cell identities of millions of voxels and thousands of annotations like synapses. For analysis, users require immediate and reproducible access to this changing and expanding data landscape. Here we present the Connectome Annotation Versioning Engine (CAVE), a computational infrastructure that provides scalable solutions for proofreading and flexible annotation support for fast analysis queries at arbitrary time points. Deployed as a suite of web services, CAVE empowers distributed communities to perform reproducible connectome analysis in up to petascale datasets (~1 mm³) while proofreading and annotation are ongoing.
  4. Abstract: The heterogeneity of brain imaging methods in neuroscience provides rich data that cannot be captured by a single technique, and our interpretations benefit from approaches that enable easy comparison both within and across different data types. For example, comparing brain-wide neural dynamics across experiments and aligning such data to anatomical resources, such as gene expression patterns or connectomes, requires precise alignment to a common set of anatomical coordinates. However, this is challenging because registering in vivo functional imaging data to ex vivo reference atlases requires accommodating differences in imaging modality, microscope specification, and sample preparation. We overcome these challenges in Drosophila by building an in vivo reference atlas from multiphoton-imaged brains, called the Functional Drosophila Atlas (FDA). We then develop a two-step pipeline, BrIdge For Registering Over Statistical Templates (BIFROST), for transforming neural imaging data into this common space and for importing ex vivo resources such as connectomes. Using genetically labeled cell types as ground truth, we demonstrate registration with a precision of less than 10 microns. Overall, BIFROST provides a pipeline for registering functional imaging datasets in the fly, both within and across experiments. Significance: Large-scale functional imaging experiments in Drosophila have given us new insights into neural activity in various sensory and behavioral contexts. However, precisely registering volumetric images from different studies has proven challenging, limiting quantitative comparisons of data across experiments. Here, we address this limitation by developing BIFROST, a registration pipeline robust to differences across experimental setups and datasets. We benchmark this pipeline by genetically labeling cell types in the fly brain and demonstrate sub-10-micron registration precision, both across specimens and across laboratories. We further demonstrate accurate registration between in vivo brain volumes and ultrastructural connectomes, enabling direct structure-function comparisons in future experiments.
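Registration precision against landmark ground truth, as benchmarked above, is typically summarized as the distance between corresponding landmarks after registration. A minimal sketch, assuming paired 3D landmark coordinates in microns (e.g., genetically labeled cell bodies located in both the registered volume and the atlas; the function name is illustrative):

```python
# Illustrative sketch: summarizing registration precision as the mean
# Euclidean distance between corresponding 3D landmarks, in microns.
# Landmark pairing is assumed to be known (e.g., labeled cell types).
import math

def mean_landmark_error_um(atlas_pts, registered_pts):
    """Mean distance between corresponding landmarks (both in microns)."""
    assert len(atlas_pts) == len(registered_pts)
    total = sum(math.dist(a, b) for a, b in zip(atlas_pts, registered_pts))
    return total / len(atlas_pts)

atlas = [(0.0, 0.0, 0.0), (100.0, 50.0, 20.0)]
registered = [(3.0, 4.0, 0.0), (100.0, 50.0, 25.0)]  # 5 um off each
print(mean_landmark_error_um(atlas, registered))  # (5.0 + 5.0) / 2 = 5.0
```

A sub-10-micron value of this statistic across specimens and laboratories is the kind of precision claim the abstract reports.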
  5. Abstract: Computational fluid dynamics (CFD) modeling of left ventricle (LV) flow combined with patient medical imaging data has shown great potential in obtaining patient-specific hemodynamics information for functional assessment of the heart. A typical model construction pipeline usually starts with segmentation of the LV by manual delineation, followed by mesh generation and registration techniques using separate software tools. However, such approaches usually require significant time and human effort in the model generation process, limiting large-scale analysis. In this study, we propose an approach toward fully automating the model generation process for CFD simulation of LV flow to significantly reduce LV CFD model generation time. Our modeling framework leverages a novel combination of techniques including deep-learning-based segmentation, geometry processing, and image registration to reliably reconstruct CFD-suitable LV models with little-to-no user intervention. We utilized an ensemble of two-dimensional (2D) convolutional neural networks (CNNs) for automatic segmentation of cardiac structures from three-dimensional (3D) patient images, and our segmentation approach outperformed recent state-of-the-art segmentation techniques when evaluated on benchmark data containing both magnetic resonance (MR) and computed tomography (CT) cardiac scans. We demonstrate that through a combination of segmentation and geometry processing, we were able to robustly create CFD-suitable LV meshes from segmentations for 78 out of 80 test cases. Although the focus of this study is on image-to-mesh generation, we demonstrate the feasibility of this framework in supporting LV hemodynamics modeling by performing CFD simulations from two representative time-resolved patient-specific image datasets.
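The ensembling step mentioned above, combining several 2D CNN segmentations, can be reduced to a simple rule: average the per-pixel foreground probabilities across models and threshold into a binary mask. A toy sketch with stand-in probability maps (no real CNNs; the function name and threshold are illustrative):

```python
# Illustrative sketch of CNN-ensemble segmentation fusion: per-pixel
# probabilities from several models are averaged and thresholded into
# a binary mask. Probability maps here are stand-in nested lists.
def ensemble_mask(prob_maps, threshold=0.5):
    """Majority-style fusion: mean probability >= threshold -> foreground."""
    n = len(prob_maps)
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    return [[int(sum(m[y][x] for m in prob_maps) / n >= threshold)
             for x in range(w)] for y in range(h)]

model_a = [[0.9, 0.2], [0.6, 0.1]]  # one 2x2 "slice" per model
model_b = [[0.7, 0.4], [0.3, 0.2]]
print(ensemble_mask([model_a, model_b]))  # [[1, 0], [0, 0]]
```

In the pipeline described above, masks like this would feed the geometry-processing stage that produces CFD-suitable LV surface meshes.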