Title: Deep learning enabled multi-organ segmentation of mouse embryos
ABSTRACT The International Mouse Phenotyping Consortium (IMPC) has generated a large repository of three-dimensional (3D) imaging data from mouse embryos, providing a rich resource for investigating phenotype/genotype interactions. While the data are freely available, the computing resources and human effort required to segment these images for analysis of individual structures can create a significant hurdle for research. In this paper, we present an open-source, deep learning-enabled tool, Mouse Embryo Multi-Organ Segmentation (MEMOS), that estimates a segmentation of 50 anatomical structures with support for manually reviewing, editing, and analyzing the estimated segmentation in a single application. MEMOS is implemented as an extension on the 3D Slicer platform and is designed to be accessible to researchers without coding experience. We validate the performance of MEMOS-generated segmentations through comparison to state-of-the-art atlas-based segmentation and quantification of previously reported anatomical abnormalities in a Cbx4 knockout strain. This article has an associated First Person interview with the first author of the paper.
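MEMOS produces a multi-label segmentation that is then quantified per structure. As a minimal sketch of that quantification step (this is not MEMOS's own code; the function name and toy label map are hypothetical), per-structure volumes follow directly from voxel counts and voxel spacing:

```python
import numpy as np

def label_volumes(labelmap, spacing):
    """Per-structure volumes (mm^3) from an integer label map.

    labelmap: 3D integer array, 0 = background
    spacing:  (sx, sy, sz) voxel spacing in mm
    """
    voxel_mm3 = float(np.prod(spacing))
    labels, counts = np.unique(labelmap[labelmap > 0], return_counts=True)
    return {int(lab): n * voxel_mm3 for lab, n in zip(labels, counts)}

# Toy example: two "organs" in an 8x8x8 volume with 0.5 mm isotropic voxels
seg = np.zeros((8, 8, 8), dtype=np.uint8)
seg[0:2, 0:2, 0:2] = 1   # 8 voxels
seg[4:6, 4:6, 4:8] = 2   # 16 voxels
vols = label_volumes(seg, (0.5, 0.5, 0.5))  # voxel = 0.125 mm^3
```

The same arithmetic underlies any label-map quantification, whatever tool produced the segmentation.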
Award ID(s):
2118240
PAR ID:
10404260
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Biology Open
Volume:
12
Issue:
2
ISSN:
2046-6390
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The decreasing cost of acquiring computed tomographic (CT) data has fueled a global effort to digitize the anatomy of museum specimens. This effort has produced a wealth of open access digital 3D models of anatomy available to anyone with access to the internet. The potential applications of these data are broad, ranging from 3D printing for purely educational purposes to the development of highly advanced biomechanical models of anatomical structures. However, while virtually anyone can access these digital data, relatively few have the training to easily derive a desirable product (e.g., a 3D visualization of an anatomical structure) from them. Here, we present a workflow based on free, open source, cross-platform software for processing CT data. We provide step-by-step instructions that start with acquiring CT data from a new reconstruction or an open access repository, and progress through visualizing, measuring, landmarking, and constructing digital 3D models of anatomical structures. We also include instructions for digital dissection, data reduction, and exporting data for use in downstream applications such as 3D printing. Finally, we provide supplementary videos and workflows that demonstrate how the workflow facilitates five specific applications: measuring functional traits associated with feeding, digitally isolating anatomical structures, isolating regions of interest using semi-automated segmentation, collecting data with simple visual tools, and reducing file size and converting file type of a 3D model. 
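The semi-automated segmentation step mentioned above typically begins with a simple intensity window, since bone is far denser (brighter in CT) than soft tissue. A minimal numpy sketch on synthetic data (illustrative only; not drawn from the workflow's actual software):

```python
import numpy as np

def threshold_segment(volume, lo, hi):
    """Binary mask of voxels whose intensity falls in [lo, hi].

    A global intensity window is the simplest form of semi-automated
    segmentation in a CT workflow; interactive tools refine it from here.
    """
    return (volume >= lo) & (volume <= hi)

# Synthetic CT-like volume: a dense "bone" block on a soft-tissue background
vol = np.full((16, 16, 16), 40.0)   # soft tissue ~ 40
vol[4:8, 4:8, 4:8] = 1200.0         # dense "bone" region
bone = threshold_segment(vol, 300.0, 3000.0)
n_bone = int(bone.sum())            # 4 * 4 * 4 = 64 voxels
```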
    more » « less
  2. The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise of superior performance with better sample efficiency. This work introduces a novel approach to creating 3-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. Our method involves a novel two-stage pretraining procedure using vision transformers. The first stage encodes anatomical structures in generally healthy brains from a large-scale unlabeled dataset of multimodal brain magnetic resonance imaging (MRI) scans from 41,400 participants. This stage of pretraining focuses on identifying key features such as the shapes and sizes of different brain structures. The second pretraining stage identifies disease-specific attributes, such as the geometric shapes of tumors and lesions and their spatial placement within the brain. This dual-phase methodology significantly reduces the extensive data requirements usually necessary for AI model training in neuroimage segmentation, with the flexibility to adapt to various imaging modalities. We rigorously evaluate our model, BrainSegFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainSegFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both model complexity and the volume of unlabeled training data derived from generally healthy brains; both factors enhance the accuracy and predictive capabilities of the model in neuroimage segmentation tasks. Our pretrained models and code are at https://github.com/lab-smile/BrainSegFounder. 
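Segmentation benchmarks such as BraTS and ATLAS are conventionally scored with the Dice similarity coefficient. A minimal numpy sketch of that metric (not BrainSegFounder's evaluation code; masks here are toy 2D examples):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True  # 16 pixels, overlap 4
score = dice(a, b)  # 2 * 4 / (16 + 16) = 0.25
```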
  3. Abstract Modern computational and imaging methods are revolutionizing the fields of comparative morphology, biomechanics, and ecomorphology. In particular, imaging tools such as X-ray micro-computed tomography (µCT) and diffusible iodine-based contrast-enhanced CT (diceCT) allow researchers to observe and measure small and/or otherwise inaccessible anatomical structures, and to create highly accurate three-dimensional (3D) renditions that can be used in biomechanical modeling and tests of functional or evolutionary hypotheses. But do the larger datasets generated through 3D digitization always confer greater power to uncover functional or evolutionary patterns than more traditional methodologies? And if so, why? Here, we contrast the advantages and challenges of using data generated via 3D CT methods versus more traditional 2D approaches in the study of skull macroevolution and feeding functional morphology in bats. First, we test for the effect of dimensionality and landmark number on inferences of adaptive shifts during cranial evolution by contrasting results from 3D versus 2D geometric morphometric datasets of bat crania. We find sharp differences between results generated from the 3D dataset versus some of the 2D datasets (xy, yz, ventral, and frontal), which appear to be driven primarily by the loss of critical dimensions of morphological variation rather than by the number of landmarks. Second, we examine differences in accuracy and precision among 2D and 3D predictive models of bite force by comparing three skull lever models that differ in the sources of skull and muscle anatomical data. We find that a 3D model that relies on skull µCT scans and muscle data partly derived from diceCT is slightly more accurate than models based on skull photographs, or on skull µCT and muscle data fully derived from dissections. However, the benefit of using the diceCT-informed model is modest given the effort it currently takes to virtually dissect muscles from CT scans. 
By contrasting traditional and modern tools, we illustrate when and why 3D datasets may be preferable over 2D data, and vice versa, and how different methodologies can complement each other in comparative analyses of morphological function and evolution. 
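In their simplest form, the skull lever models compared above reduce to a static mechanical-advantage calculation: bite force scales the input muscle force by the ratio of in-lever to out-lever length. A minimal sketch with hypothetical measurements (the actual models in the study incorporate far more anatomical detail):

```python
def bite_force(muscle_force, in_lever, out_lever):
    """Static 2D lever estimate of bite force:
    F_bite = F_muscle * (in-lever / out-lever).

    in_lever:  distance from jaw joint to muscle insertion
    out_lever: distance from jaw joint to bite point
    """
    return muscle_force * in_lever / out_lever

# Hypothetical values: 10 N muscle force, 5 mm in-lever, 20 mm out-lever
f = bite_force(10.0, 5.0, 20.0)  # mechanical advantage 0.25 -> 2.5 N
```

Longer out-levers (bites at the tip of the snout) trade force for speed, which is why bite point matters as much as muscle anatomy in such models.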
  4.
    Cryo-electron tomography (cryo-ET) generates 3D visualizations of cellular organization that allow biologists to analyze cellular structures in a near-native state at nanometer resolution. Recently, deep learning methods have demonstrated promising performance in the classification and segmentation of macromolecular structures captured by cryo-ET, but training individual deep learning models requires large amounts of manually labeled and segmented data from previously observed classes. To perform classification and segmentation in the wild (i.e., with limited training data and with unseen classes), novel deep learning models need to be developed to classify and segment unseen macromolecules captured by cryo-ET. In this paper, we develop a one-shot learning framework, called cryo-ET one-shot network (COS-Net), for simultaneous classification of macromolecular structure and generation of voxel-level 3D segmentations, using only one training sample per class. Our experimental results on 22 macromolecule classes demonstrate that COS-Net can efficiently classify macromolecular structures with small numbers of samples while producing accurate 3D segmentations. 
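COS-Net's architecture is not detailed here, but the generic one-shot recipe it exemplifies is nearest-prototype matching on learned embeddings: a query is assigned to the class of the single labeled exemplar it most resembles. A minimal sketch with hypothetical embedding vectors (the real network learns these from tomogram subvolumes):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def one_shot_classify(query, prototypes):
    """Assign a query embedding to the class of its most similar prototype.

    prototypes: {class_name: embedding}, one labeled exemplar per class.
    """
    return max(prototypes, key=lambda c: cosine(query, prototypes[c]))

# Hypothetical 3D embeddings standing in for learned features
protos = {
    "ribosome":   np.array([1.0, 0.0, 0.0]),
    "proteasome": np.array([0.0, 1.0, 0.0]),
}
label = one_shot_classify(np.array([0.9, 0.1, 0.0]), protos)  # "ribosome"
```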
  5. Abstract As computed tomography and related technologies have become mainstream tools across a broad range of scientific applications, each new generation of instrumentation produces larger volumes of more-complex 3D data. Lagging behind are step-wise improvements in computational methods to rapidly analyze these new large, complex datasets. Here we describe novel computational methods to capture and quantify volumetric information, and to efficiently characterize and compare shape volumes, based on an innovative theoretical and computational reformulation of volumetric computing. The approach consists of two theoretical constructs and their numerical implementation: the spherical wave decomposition (SWD), which provides fast, accurate, automated characterization of shapes embedded within complex 3D datasets; and symplectomorphic registration with phase space regularization by entropy spectrum pathways (SYMREG), a non-linear volumetric registration method that allows homologous structures to be correctly warped to each other or to a common template for comparison. Together, these constitute the Shape Analysis for Phenomics from Imaging Data (SAPID) method. We demonstrate its ability to automatically provide rapid quantitative segmentation and characterization of single unique datasets, and both inter- and intra-specific comparative analyses. We go beyond pairwise comparisons and analyze collections of samples from 3D data repositories, highlighting the magnified potential our method has when applied to data collections. We discuss the potential of SAPID in the broader context of generating the normative morphologies required for meaningfully quantifying and comparing variation in complex 3D anatomical structures and systems. 
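The SWD itself involves spherical-harmonic machinery beyond the scope of a snippet, but the underlying idea of a compact, rotation-invariant volumetric shape descriptor can be illustrated with a toy radial-distance histogram (purely illustrative; this is not the SAPID implementation):

```python
import numpy as np

def radial_descriptor(mask, bins=8):
    """Toy rotation-invariant shape descriptor: normalized histogram of
    voxel distances from the shape centroid. Rotating the shape about its
    centroid leaves the distance distribution, and hence the descriptor,
    unchanged."""
    coords = np.argwhere(mask)
    centroid = coords.mean(axis=0)
    r = np.linalg.norm(coords - centroid, axis=1)
    hist, _ = np.histogram(r, bins=bins, range=(0.0, r.max() + 1e-9))
    return hist / hist.sum()

# Toy shape: an 8x8x8 cube embedded in a 12x12x12 volume
cube = np.zeros((12, 12, 12), dtype=bool)
cube[2:10, 2:10, 2:10] = True
desc = radial_descriptor(cube)
```

Descriptors like this can be compared across specimens without first aligning them, which is the practical appeal of rotation-invariant characterizations.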