

Title: Learning Cortical Parcellations Using Graph Neural Networks
Deep learning has been applied to magnetic resonance imaging (MRI) for a variety of purposes, ranging from the acceleration of image acquisition and image denoising to tissue segmentation and disease diagnosis. Convolutional neural networks have been particularly useful for analyzing MRI data due to the regularly sampled spatial and temporal nature of the data. However, advances in the field of brain imaging have led to network- and surface-based analyses that are often better represented in the graph domain. In this analysis, we propose a general-purpose cortical segmentation method that, given resting-state connectivity features readily computed during conventional MRI pre-processing and a set of corresponding training labels, can generate cortical parcellations for new MRI data. We applied recent advances in the field of graph neural networks to the problem of cortical surface segmentation, using resting-state connectivity to learn discrete maps of the human neocortex. We found that graph neural networks accurately learn low-dimensional representations of functional brain connectivity that can be naturally extended to map the cortices of new datasets. After optimizing over algorithm type, network architecture, and training features, our approach yielded mean classification accuracies of 79.91% relative to a previously published parcellation. We describe how hyperparameter choices, including training and testing data duration, network architecture, and algorithm choice, affect model performance.
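The following is a minimal sketch, not the published implementation, of how a graph neural network can assign parcel labels to cortical-surface vertices from resting-state connectivity features: the surface mesh supplies the graph edges, so each vertex's prediction is informed by its neighbors' connectivity profiles. It uses a generic two-layer graph convolutional network in PyTorch Geometric; the layer sizes, class names, and training loop are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of graph-convolutional
# node classification on a cortical surface: each vertex carries resting-state
# connectivity features and receives one of K parcel labels. Names, shapes,
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ParcellationGCN(torch.nn.Module):
    def __init__(self, in_features: int, hidden: int, n_parcels: int):
        super().__init__()
        self.conv1 = GCNConv(in_features, hidden)   # mix features along mesh edges
        self.conv2 = GCNConv(hidden, n_parcels)     # per-vertex parcel logits

    def forward(self, x, edge_index):
        # x: [n_vertices, in_features] connectivity features per surface vertex
        # edge_index: [2, n_edges] surface-mesh adjacency
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)

# Hypothetical training step: x, edge_index, and labels would come from a
# training subject's surface mesh and an existing parcellation.
def train_step(model, optimizer, x, edge_index, labels):
    model.train()
    optimizer.zero_grad()
    logits = model(x, edge_index)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```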
Award ID(s):
1734430
NSF-PAR ID:
10397889
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Frontiers in Neuroscience
Volume:
15
ISSN:
1662-453X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This work presents a novel deep learning architecture, BNU-Net, for cardiac segmentation from short-axis MRI images. Its name derives from the Batch Normalized (BN) U-Net architecture for medical image segmentation. Convolutional neural networks (CNNs) such as U-Net are widely used for image classification and segmentation tasks: they are supervised models that learn hierarchies of features automatically and perform classification robustly. Our architecture consists of an encoding path for feature extraction and a decoding path that enables precise localization. We compare this approach with the parallel U-Net approach. Both BNU-Net and U-Net are cardiac segmentation approaches: BNU-Net applies batch normalization to the output of each convolutional layer and uses the exponential linear unit (ELU) as its activation function, whereas U-Net does not apply batch normalization and is based on rectified linear units (ReLU). The presented work (i) applies various image preprocessing techniques, including affine transformations and elastic deformations, and (ii) segments the preprocessed images using the new deep learning architecture. We evaluate our approach on a dataset containing 805 MRI images from 45 patients. The experimental results reveal that our approach achieves comparable or better performance than other state-of-the-art approaches in terms of the Dice coefficient and the average perpendicular distance. Index Terms—Magnetic Resonance Imaging; Batch Normalization; Exponential Linear Units
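As a rough illustration of the architectural contrast described above, the sketch below shows a BNU-Net-style convolution block (convolution, batch normalization, ELU) next to a plain U-Net block (convolution, ReLU) in PyTorch; channel counts and kernel sizes are assumptions, not the published configuration.

```python
# A minimal sketch of the difference the abstract describes: a BNU-Net-style
# block (Conv -> BatchNorm -> ELU) versus a plain U-Net block (Conv -> ReLU).
# Layer sizes are illustrative assumptions, not the published architecture.
import torch.nn as nn

def bnu_net_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions, each followed by batch normalization and ELU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ELU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ELU(inplace=True),
    )

def unet_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """The corresponding plain U-Net block: convolutions with ReLU only."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )
```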
  2. Iron overload, a complication of repeated blood transfusions, can cause tissue damage and organ failure. The body has no regulatory mechanism to excrete excess iron, so iron overload must be closely monitored to guide therapy and measure treatment response. The concentration of iron in the liver is a reliable marker for total body iron content and is now measured noninvasively with magnetic resonance imaging (MRI). MRI produces a diagnostic image by measuring the signals emitted from the body in the presence of a constant magnetic field and radiofrequency pulses. At each pixel, the signal decay constant, T2*, can be calculated, providing insight about the structure of each tissue. Liver iron content can be quantified based on this T2* value because signal decay accelerates with increasing iron concentration. We developed a method to automatically segment the liver from the MRI image to accurately calculate iron content. Our current algorithm utilizes the active contour model for image segmentation, which iteratively evolves a curve until it reaches an edge or a boundary. We applied this algorithm to each MRI image in addition to a map of pixelwise T2* values, combining basic image processing with imaging physics. One of the limitations of this segmentation model is how it handles noise in the MRI data. Recent advancements in deep learning have enabled researchers to utilize convolutional neural networks to denoise and reconstruct images. We used the Trainable Nonlinear Reaction Diffusion network architecture to denoise the MRI images, allowing for smoother segmentation while preserving fine details. 
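As a minimal illustration of the pixelwise T2* calculation described above (not the authors' code), the monoexponential decay model S(TE) = S0 * exp(-TE / T2*) can be fit per pixel by log-linear least squares across echoes; the array names and shapes below are assumptions.

```python
# A minimal sketch of pixelwise T2* estimation from a multi-echo gradient-echo
# series: log S = log S0 - TE / T2*, fit by ordinary least squares per pixel.
# Echo times and array names are illustrative assumptions.
import numpy as np

def fit_t2star_map(echoes: np.ndarray, echo_times_ms: np.ndarray) -> np.ndarray:
    """echoes: [n_echoes, H, W] magnitude images; returns a T2* map in ms."""
    n_echoes, h, w = echoes.shape
    log_s = np.log(np.clip(echoes, 1e-6, None)).reshape(n_echoes, -1)        # [n_echoes, H*W]
    design = np.column_stack([np.ones_like(echo_times_ms), -echo_times_ms])  # [n_echoes, 2]
    coeffs, *_ = np.linalg.lstsq(design, log_s, rcond=None)                  # [2, H*W]
    r2star = np.clip(coeffs[1], 1e-6, None)   # decay rate 1/T2* (1/ms)
    return (1.0 / r2star).reshape(h, w)       # shorter T2* indicates higher iron content
```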
  3. INTRODUCTION A brainwide, synaptic-resolution connectivity map—a connectome—is essential for understanding how the brain generates behavior. However, because of technological constraints, imaging entire brains with electron microscopy (EM) and reconstructing circuits from such datasets has been challenging. To date, complete connectomes have been mapped for only three organisms, each with several hundred brain neurons: the nematode C. elegans, the larva of the sea squirt Ciona intestinalis, and the larva of the marine annelid Platynereis dumerilii. Synapse-resolution circuit diagrams of larger brains, such as those of insects, fish, and mammals, have been approached by considering select subregions in isolation. However, neural computations span spatially dispersed but interconnected brain regions, and understanding any one computation requires the complete brain connectome with all its inputs and outputs. RATIONALE We therefore generated a connectome of an entire brain of a small insect, the larva of the fruit fly Drosophila melanogaster. This animal displays a rich behavioral repertoire, including learning, value computation, and action selection, and shares homologous brain structures with adult Drosophila and larger insects. Powerful genetic tools are available for selective manipulation or recording of individual neuron types. In this tractable model system, hypotheses about the functional roles of specific neurons and circuit motifs revealed by the connectome can therefore be readily tested. RESULTS The complete synaptic-resolution connectome of the Drosophila larval brain comprises 3016 neurons and 548,000 synapses. We performed a detailed analysis of the brain circuit architecture, including connection and neuron types, network hubs, and circuit motifs. Most of the brain's in-out hubs (73%) were postsynaptic to the learning center or presynaptic to the dopaminergic neurons that drive learning. We used graph spectral embedding to hierarchically cluster neurons based on synaptic connectivity into 93 neuron types, which were internally consistent based on other features, such as morphology and function. We developed an algorithm to track brainwide signal propagation across polysynaptic pathways and analyzed feedforward (from sensory to output) and feedback pathways, multisensory integration, and cross-hemisphere interactions. We found extensive multisensory integration throughout the brain and multiple interconnected pathways of varying depths from sensory neurons to output neurons, forming a distributed processing network. The brain had a highly recurrent architecture, with 41% of neurons receiving long-range recurrent input. However, recurrence was not evenly distributed and was especially high in areas implicated in learning and action selection. Dopaminergic neurons that drive learning are amongst the most recurrent neurons in the brain. Many contralateral neurons, which project across brain hemispheres, were in-out hubs and synapsed onto each other, facilitating extensive interhemispheric communication. We also analyzed interactions between the brain and nerve cord. We found that descending neurons targeted a small fraction of premotor elements that could play important roles in switching between locomotor states. A subset of descending neurons targeted low-order post-sensory interneurons, likely modulating sensory processing.
CONCLUSION The complete brain connectome of the Drosophila larva will be a lasting reference study, providing a basis for a multitude of theoretical and experimental studies of brain function. The approach and computational tools generated in this study will facilitate the analysis of future connectomes. Although the details of brain organization differ across the animal kingdom, many circuit architectures are conserved. As more brain connectomes of other organisms are mapped in the future, comparisons between them will reveal both common, and therefore potentially optimal, circuit architectures, as well as the idiosyncratic ones that underlie behavioral differences between organisms. Some of the architectural features observed in the Drosophila larval brain, including multilayer shortcuts and prominent nested recurrent loops, are found in state-of-the-art artificial neural networks, where they can compensate for a lack of network depth and support arbitrary, task-dependent computations. Such features could therefore increase the brain's computational capacity, overcoming physiological constraints on the number of neurons. Future analysis of similarities and differences between brains and artificial neural networks may help in understanding brain computational principles and perhaps inspire new machine learning architectures. The connectome of the Drosophila larval brain: the morphologies of all brain neurons, reconstructed from a synapse-resolution EM volume, and the synaptic connectivity matrix of an entire brain. This connectivity information was used to hierarchically cluster all brain neurons into 93 cell types, which were internally consistent based on morphology and known function.
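A minimal sketch of the spectral-embedding-plus-clustering strategy mentioned in the Results, under the assumption of a dense synapse-count matrix: symmetrize the connectivity, embed neurons using low-frequency eigenvectors of the normalized graph Laplacian, and cluster the embedding hierarchically. The embedding dimension, symmetrization, and clustering choices are illustrative, not the study's exact pipeline.

```python
# A minimal sketch of graph spectral embedding followed by hierarchical
# clustering of neurons by synaptic connectivity. Parameters are assumptions.
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.hierarchy import linkage, fcluster

def spectral_cluster_neurons(synapse_counts: np.ndarray, n_dims: int = 20,
                             n_types: int = 93) -> np.ndarray:
    """synapse_counts: [n_neurons, n_neurons] directed synapse-count matrix."""
    # Symmetrize and form a normalized graph Laplacian.
    a = synapse_counts + synapse_counts.T
    d = a.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    lap = np.eye(len(a)) - d_inv_sqrt @ a @ d_inv_sqrt
    # Low-frequency eigenvectors give a low-dimensional embedding of each neuron
    # (index 0 is the trivial eigenvector and is skipped).
    _, vecs = eigh(lap, subset_by_index=[1, n_dims])
    # Hierarchical (Ward) clustering of the embedding into candidate cell types.
    tree = linkage(vecs, method="ward")
    return fcluster(tree, t=n_types, criterion="maxclust")
```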
  4. A central goal in neuroscience is to understand how dynamic networks of neural activity produce effective representations of the world. Advances in the theory of graph measures raise the possibility of elucidating network topologies central to the construction of these representations. We leverage a result from the description of lollipop graphs to identify an iconic network topology in functional magnetic resonance imaging data and characterize changes to those networks during task performance and in populations diagnosed with psychiatric disorders. During task performance, we find that task-relevant subnetworks change topology, becoming more integrated by increasing connectivity throughout the cortex. Analysis of resting-state connectivity in clinical populations shows a similar pattern of subnetwork topology changes: resting-state scans become less default-like, with more integrated sensory paths. The study of brain network topologies and their relationship to cognitive models of information processing raises new opportunities for understanding brain function and its disorders.
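The lollipop-graph result used in this study is specific to it; purely as an illustration of the kind of integration measure under discussion, the sketch below binarizes a functional connectivity matrix and computes global efficiency, a standard graph-theoretic index of network integration. The threshold and variable names are assumptions.

```python
# A generic illustration (not this study's method) of quantifying network
# integration from a functional connectivity matrix via global efficiency.
import numpy as np
import networkx as nx

def network_integration(fc: np.ndarray, threshold: float = 0.3) -> float:
    """fc: [n_regions, n_regions] correlation matrix -> global efficiency."""
    adj = (np.abs(fc) >= threshold).astype(float)   # keep suprathreshold connections
    np.fill_diagonal(adj, 0.0)
    g = nx.from_numpy_array(adj)
    return nx.global_efficiency(g)                   # higher values indicate a more integrated network
```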
  5. Background

    Cognitive training may partially reverse cognitive deficits in people with HIV (PWH). Previous functional MRI (fMRI) studies demonstrate that working memory training (WMT) alters brain activity during working memory tasks, but its effects on resting brain network organization remain unknown.

    Purpose

    To test whether WMT affects PWH brain functional connectivity in resting‐state fMRI (rsfMRI).

    Study Type

    Prospective.

    Population

    A total of 53 PWH (ages 50.7 ± 1.5 years, two women) and 53 HIV‐seronegative controls (SN, ages 49.5 ± 1.6 years, six women).

    Field Strength/Sequence

    Axial single‐shot gradient‐echo echo‐planar imaging at 3.0 T was performed at baseline (TL1), at 1 month (TL2), and at 6 months (TL3) after WMT.

    Assessment

    All participants had rsfMRI and clinical assessments (including neuropsychological tests) at TL1 before randomization to Cogmed WMT (adaptive training, n = 58: 28 PWH, 30 SN; nonadaptive training, n = 48: 25 PWH, 23 SN), 25 sessions over 5–8 weeks. All assessments were repeated at TL2 and at TL3. Functional connectivity estimated by independent component analysis (ICA) and graph theory (GT) metrics (eigenvector centrality, etc.) at different link densities (LDs) were compared between the PWH and SN groups at TL1 and TL2.

    Statistical Tests

    Two‐way analyses of variance (ANOVA) on GT metrics and two‐sample t‐tests on FC or GT metrics were performed. Cognitive measures (e.g., memory) were correlated with eigenvector centrality (eCent) using Pearson's correlations. The significance level was set at P < 0.05 after false discovery rate correction.

    Results

    The ventral default mode network (vDMN) eCent differed between the PWH and SN groups at TL1 but not at TL2 (P = 0.28). In PWH, changes in vDMN eCent correlated significantly with changes in memory ability (r = −0.62 at LD = 50%), and vDMN eCent before training correlated significantly with changes in memory performance (r = 0.53 at LD = 50%).

    Data Conclusion

    ICA and GT analyses showed that adaptive WMT normalized graph properties of the vDMN in PWH.

    Evidence Level

    1

    Technical Efficacy

    1
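A minimal sketch of the graph-theory analysis outlined in the Assessment and Statistical Tests sections above, not the study's code: eigenvector centrality is computed from a connectivity matrix thresholded at a chosen link density, and its change is then correlated with memory change across participants. Variable names and the 50% link density are illustrative assumptions.

```python
# A minimal sketch of eigenvector centrality at a given link density, plus the
# Pearson correlation with a memory change score. All names are assumptions.
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

def ecent_at_link_density(fc: np.ndarray, link_density: float = 0.5) -> np.ndarray:
    """fc: [n_nodes, n_nodes] correlation matrix -> eigenvector centrality per node."""
    n = fc.shape[0]
    upper = fc[np.triu_indices(n, k=1)]
    cutoff = np.quantile(upper, 1.0 - link_density)   # keep the strongest fraction of edges
    adj = np.where(fc >= cutoff, 1.0, 0.0)
    np.fill_diagonal(adj, 0.0)
    g = nx.from_numpy_array(np.maximum(adj, adj.T))
    cent = nx.eigenvector_centrality_numpy(g)         # dict: node -> centrality
    return np.array([cent[i] for i in range(n)])

# Hypothetical group-level step: correlate vDMN centrality change (TL2 - TL1)
# with memory score change across participants.
def centrality_memory_correlation(ecent_change: np.ndarray, memory_change: np.ndarray):
    r, p = pearsonr(ecent_change, memory_change)
    return r, p
```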

     