Title: Hidden Markov modeling for maximum probability neuron reconstruction
Abstract: Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution. These images offer the potential to assemble brain-wide atlases of neuron morphology, but manual neuron reconstruction remains a bottleneck. Several automatic reconstruction algorithms exist, but most focus on single-neuron images. In this paper, we present a probabilistic reconstruction method, ViterBrain, which combines a hidden Markov state process that encodes neuron geometry with a random field appearance model of neuron fluorescence. ViterBrain utilizes dynamic programming to compute the global maximizer of what we call the most probable neuron path. We applied our algorithm to imperfect image segmentations and showed that it can follow axons in the presence of noise or nearby neurons. We also provide an interactive framework where users can trace neurons by fixing start and end points. ViterBrain is available in our open-source Python package.
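The abstract describes combining an appearance model with a hidden Markov state process and using dynamic programming to find a maximum-probability path. Below is a minimal, illustrative sketch of that general idea: a Viterbi/Bellman-Ford-style relaxation over candidate fragment states with emission and transition log-probabilities. It is not the ViterBrain implementation; the function name, interface, and probability arrays are hypothetical placeholders.

```python
# Sketch of dynamic programming for a maximum-probability path over candidate
# fragments. Hypothetical interface; NOT the ViterBrain code.
import numpy as np

def most_probable_path(log_emission, log_transition, start, end):
    """Return (path, log_probability) from fragment `start` to fragment `end`.

    log_emission   : (n,) appearance log-probability of each fragment
    log_transition : (n, n) geometric log-probability of stepping i -> j
    """
    n = len(log_emission)
    best = np.full(n, -np.inf)          # best log-probability reaching each fragment
    back = np.full(n, -1, dtype=int)    # back-pointer for path reconstruction
    best[start] = log_emission[start]

    # Relax transitions up to n-1 times; log-probabilities are <= 0, so
    # cycles can never improve a score and the maximization is well defined.
    for _ in range(n - 1):
        updated = False
        for i in range(n):
            if best[i] == -np.inf:
                continue
            for j in range(n):
                score = best[i] + log_transition[i, j] + log_emission[j]
                if score > best[j]:
                    best[j], back[j] = score, i
                    updated = True
        if not updated:
            break

    # Follow back-pointers from the endpoint to recover the path.
    path, state = [end], end
    while state != start and back[state] != -1:
        state = back[state]
        path.append(state)
    return path[::-1], best[end]
```

A usage sketch: with a handful of fragments, `most_probable_path(log_e, log_t, 0, 4)` returns the highest-probability fragment sequence connecting a user-chosen start and end, which is the kind of start/endpoint tracing the abstract's interactive framework refers to.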
Award ID(s):
2014862
PAR ID:
10331117
Date Published:
Journal Name:
Communications Biology
Volume:
5
Issue:
1
ISSN:
2399-3642
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Larvae of the fruit fly Drosophila melanogaster are a powerful study case for understanding the neural circuits underlying behavior. Indeed, the numerical simplicity of the larval brain has permitted the reconstruction of its synaptic connectome, and genetic tools for manipulating single, identified neurons allow neural circuit function to be investigated with relative ease and precision. We focus on one of the most complex neurons in the brain of the larva (of either sex), the GABAergic anterior paired lateral neuron (APL). Using behavioral and connectomic analyses, optogenetics, Ca2+ imaging, and pharmacology, we study how APL affects associative olfactory memory. We first provide a detailed account of the structure, regional polarity, connectivity, and metamorphic development of APL, and further confirm that optogenetic activation of APL has an inhibiting effect on its main targets, the mushroom body Kenyon cells. All these findings are consistent with the previously identified function of APL in the sparsening of sensory representations. To our surprise, however, we found that optogenetically activating APL can also have a strong rewarding effect. Specifically, APL activation together with odor presentation establishes an odor-specific, appetitive, associative short-term memory, whereas naive olfactory behavior remains unaffected. An acute, systemic inhibition of dopamine synthesis as well as an ablation of the dopaminergic pPAM neurons impair reward learning through APL activation. Our findings provide a study case of complex circuit function in a numerically simple brain, and suggest a previously unrecognized capacity of central-brain GABAergic neurons to engage in dopaminergic reinforcement. SIGNIFICANCE STATEMENT: The single, identified giant anterior paired lateral (APL) neuron is one of the most complex neurons in the insect brain. It is GABAergic and contributes to the sparsening of neuronal activity in the mushroom body, the memory center of insects. We provide the most detailed account yet of the structure of APL in larval Drosophila as a neurogenetically accessible study case. We further reveal that, contrary to expectations, the experimental activation of APL can exert a rewarding effect, likely via dopaminergic reward pathways. The present study both provides an example of unexpected circuit complexity in a numerically simple brain, and reports an unexpected effect of activity in central-brain GABAergic circuits.
  2. To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses and comprises 1.4 petabytes. Our analysis showed that glia outnumber neurons 2:1, oligodendrocytes were the most common cell, deep layer excitatory neurons could be classified on the basis of dendritic orientation, and among thousands of weak connections to each neuron, there exist rare powerful axonal inputs of up to 50 synapses. Further studies using this resource may bring valuable insights into the mysteries of the human brain. 
  3. Understanding the intricacies of the brain often requires spotting and tracking specific neurons over time and across different individuals. For instance, scientists may need to precisely monitor the activity of one neuron even as the brain moves and deforms; or they may want to find universal patterns by comparing signals from the same neuron across different individuals. Both tasks require matching which neuron is which in different images and amongst a constellation of cells. This is theoretically possible in certain 'model' animals where every single neuron is known and carefully mapped out. Still, it remains challenging: neurons move relative to one another as the animal changes posture, and the position of a cell is also slightly different between individuals. Sophisticated computer algorithms are increasingly used to tackle this problem, but they are far too slow to track neural signals as real-time experiments unfold. To address this issue, Yu et al. designed a new algorithm based on the Transformer, an artificial neural network originally used to spot relationships between words in sentences. To learn relationships between neurons, the algorithm was fed hundreds of thousands of 'semi-synthetic' examples of constellations of neurons. Instead of being painstakingly collated from actual experimental data, these datasets were created by a simulator based on a few simple measurements. Testing the new algorithm on the tiny worm Caenorhabditis elegans revealed that it was faster and more accurate, finding corresponding neurons in about 10 ms. The work by Yu et al. demonstrates the power of using simulations rather than experimental data to train artificial networks. The resulting algorithm can be used immediately to help study how the brain of C. elegans makes decisions or controls movements. Ultimately, this research could allow brain-machine interfaces to be developed.
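To make the Transformer-based matching idea concrete, here is a minimal sketch of a model that embeds two constellations of 3D neuron positions with a Transformer encoder and scores candidate correspondences. It is not the authors' architecture; the class name, layer sizes, and the soft-assignment readout are illustrative assumptions only.

```python
# Minimal sketch: Transformer encoder over neuron positions, producing a soft
# correspondence matrix between a labeled template and a new recording.
import torch
import torch.nn as nn

class NeuronMatcher(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(3, d_model)          # embed (x, y, z) positions
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, template_xyz, test_xyz):
        # Encode each constellation so every neuron attends to its neighbors.
        t = self.encoder(self.embed(template_xyz))   # (1, N, d_model)
        s = self.encoder(self.embed(test_xyz))       # (1, M, d_model)
        # Similarity of every template/test neuron pair, as soft assignments.
        scores = torch.einsum("bnd,bmd->bnm", t, s)
        return scores.softmax(dim=-1)

# Stand-in usage on random data; real training would use simulator-generated
# "semi-synthetic" constellations, as the study describes.
matcher = NeuronMatcher()
template = torch.randn(1, 100, 3)      # 100 labeled neuron positions
test = torch.randn(1, 100, 3)          # same neurons after deformation
assignment = matcher(template, test)   # (1, 100, 100) soft matching matrix
```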
  4. Abstract: Neuronal cell death and subsequent brain dysfunction are hallmarks of aging and neurodegeneration, but how the nearby healthy neurons (bystanders) respond to the death of their neighbors is not fully understood. In the Drosophila larval neuromuscular system, bystander motor neurons can structurally and functionally compensate for the loss of their neighbors by increasing their terminal bouton number and activity. We term this compensation cross-neuron plasticity, and in this study, we demonstrate that the Drosophila engulfment receptor, Draper, and the associated kinase, Shark, are required for cross-neuron plasticity. Overexpression of the Draper-I isoform boosts cross-neuron plasticity, implying that the strength of plasticity correlates with Draper signaling. In addition, we find that functional cross-neuron plasticity can be induced at different developmental stages. Our work uncovers a role for Draper signaling in cross-neuron plasticity and provides insights into how healthy bystander neurons respond to the loss of their neighboring neurons.
  5. Tomaszewski, John E.; Ward, Aaron D. (Ed.)
    Automatic cell quantification in microscopy images can accelerate biomedical research. There has been significant progress in the 3D segmentation of neurons in fluorescence microscopy, but it remains a challenge in bright-field microscopy due to the low signal-to-noise ratio and signals from out-of-focus neurons. Automatic neuron counting in bright-field z-stacks is often performed on Extended Depth of Field images or on a single thick focal-plane image; however, resolving overlapping cells that lie at different z-depths is then difficult. The overlap can be resolved by counting every neuron in its best-focus z-plane, because the cells are separated along the z-axis. Unbiased stereology is the state of the art for total cell number estimation, and applying its unbiased counting rule requires a segmentation boundary for each cell. Hence, we perform counting via segmentation. We propose to achieve neuron segmentation in the optimal focal plane by posing the binary segmentation task as a multi-class, multi-label task, and to efficiently use a 2D U-Net for inter-image feature learning in a Multiple Input Multiple Output (MIMO) system. We demonstrate the accuracy and efficiency of the MIMO approach using a bright-field microscopy z-stack dataset prepared locally by an expert. The proposed MIMO approach is also validated on a dataset from the Cell Tracking Challenge, achieving results comparable to a compared method equipped with memory units. Our z-stack dataset is available at
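The MIMO framing above amounts to feeding several z-slices into one 2D network as input channels and predicting one binary mask per slice, so per-slice binary segmentation becomes a multi-label output. The sketch below illustrates only that input/output arrangement; the tiny convolutional model is a stand-in for a real 2D U-Net, and all names and sizes are illustrative assumptions rather than the authors' architecture.

```python
# Minimal sketch of a multiple-input multiple-output (MIMO) segmenter:
# n_slices z-planes in as channels, one mask logit map out per plane.
import torch
import torch.nn as nn

class TinyMIMOSegmenter(nn.Module):
    def __init__(self, n_slices=5, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_slices, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n_slices, 1),   # one output channel per input slice
        )

    def forward(self, zstack):
        # zstack: (batch, n_slices, H, W) -> per-slice mask logits, same shape.
        return self.net(zstack)

model = TinyMIMOSegmenter(n_slices=5)
zstack = torch.rand(2, 5, 128, 128)                     # two mini z-stacks
target = (torch.rand(2, 5, 128, 128) > 0.5).float()     # stand-in per-slice ground truth
loss = nn.BCEWithLogitsLoss()(model(zstack), target)    # multi-label training loss
masks = torch.sigmoid(model(zstack)) > 0.5              # per-slice binary predictions
```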