Title: Hidden Markov modeling for maximum probability neuron reconstruction
Abstract: Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution. These images offer the potential to assemble brain-wide atlases of neuron morphology, but manual neuron reconstruction remains a bottleneck. Several automatic reconstruction algorithms exist, but most focus on single neuron images. In this paper, we present a probabilistic reconstruction method, ViterBrain, which combines a hidden Markov state process that encodes neuron geometry with a random field appearance model of neuron fluorescence. ViterBrain utilizes dynamic programming to compute the global maximizer of what we call the most probable neuron path. We applied our algorithm to imperfect image segmentations and showed that it can follow axons in the presence of noise or nearby neurons. We also provide an interactive framework where users can trace neurons by fixing start and endpoints. ViterBrain is available in our open-source Python package.
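The reconstruction step described above reduces to a maximum-probability path computation via dynamic programming. As a generic illustration only, and not the ViterBrain implementation (whose hidden states are image fragments and whose emission term is a random-field fluorescence model), the sketch below shows the standard Viterbi recursion; the array names and shapes are assumptions.

```python
# A minimal, generic Viterbi sketch (NumPy). This is the textbook dynamic
# program for the most probable hidden-state path of an HMM; it is NOT the
# ViterBrain implementation, whose states are image fragments and whose
# emission model is a random field over neuron fluorescence.
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Return the maximum-probability hidden-state path.

    log_init  : (S,)   log p(state_0)
    log_trans : (S, S) log p(state_t = j | state_{t-1} = i) at [i, j]
    log_emit  : (T, S) log p(observation_t | state_t)
    """
    T, S = log_emit.shape
    score = np.full((T, S), -np.inf)    # best log-probability of any path ending in each state
    back = np.zeros((T, S), dtype=int)  # argmax predecessor, used for traceback
    score[0] = log_init + log_emit[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans          # cand[i, j]: come from i, move to j
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(score[-1]))]                    # trace the global maximizer backwards
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

ViterBrain applies the same max-product principle; per the abstract, user-fixed start and end points constrain which paths are considered.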
Authors:
Award ID(s):
2014862
Publication Date:
NSF-PAR ID:
10331117
Journal Name:
Communications Biology
Volume:
5
Issue:
1
ISSN:
2399-3642
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Θ(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1−δ and should stop firing after at most 2t rounds with probability 1−δ, for some input parameter δ∈(0,1). Our key result is a construction of a neural timer with O(log log 1/δ) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Ω(min{log log 1/δ, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS’90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time. (A toy countdown simulation of the timer specification appears after this list.)
  2. Morrison, Abigail (Ed.)
    Assessing directional influences between neurons is instrumental to understanding how brain circuits process information. To this end, Granger causality, a technique originally developed for time-continuous signals, has been extended to discrete spike trains. A fundamental assumption of this technique is that the temporal evolution of neuronal responses must be due only to endogenous interactions between recorded units, including self-interactions. This assumption is, however, rarely met in neurophysiological studies, where the response of each neuron is modulated by other exogenous causes such as, for example, other unobserved units or slow adaptation processes. Here, we propose a novel point-process Granger causality technique that is robust with respect to the two most common exogenous modulations observed in real neuronal responses: within-trial temporal variations in spiking rate and between-trial variability in their magnitudes. This novel method works by explicitly including both types of modulations in the generalized linear model of the neuronal conditional intensity function (CIF). We then assess the causal influence of neuron i onto neuron j by measuring the relative reduction of neuron j’s point-process likelihood obtained when including or excluding neuron i. The CIF’s hyper-parameters are set on a per-neuron basis by minimizing Akaike’s information criterion. In synthetic data sets, generated by means of random processes or networks of integrate-and-fire units, the proposed method recovered the underlying ground-truth connectivity pattern with high accuracy, sensitivity, and robustness. Application of presently available point-process Granger causality techniques instead produced a significant number of false-positive connections. In real spiking responses recorded from neurons in the monkey pre-motor cortex (area F5), our method revealed many causal relationships between neurons as well as the temporal structure of their interactions. Given its robustness, our method can be effectively applied to real neuronal data. Furthermore, its explicit estimate of the effects of unobserved causes on the recorded neuronal firing patterns can help decompose their temporal variations into endogenous and exogenous components. (A simplified likelihood-comparison sketch appears after this list.)
  3. Reconstructed terabyte and petabyte electron microscopy image volumes contain fully segmented neurons at resolutions fine enough to identify every synaptic connection. After manual or automatic reconstruction, neuroscientists want to extract wiring diagrams and connectivity information to analyze the data at a higher level. Despite significant advances in image acquisition, neuron segmentation, and synapse detection techniques, the extracted wiring diagrams are still quite coarse and often do not take into account the wealth of information in the densely reconstructed volumes. We propose a synapse-aware skeleton generation strategy to transform the reconstructed volumes into an information-rich yet abstract format on which neuroscientists can perform biological analysis and run simulations. Our method extends existing topological thinning strategies and guarantees a one-to-one correspondence between skeleton endpoints and synapses while simultaneously generating vital geometric statistics on the neuronal processes. We demonstrate our results on three large-scale connectomic datasets and compare against current state-of-the-art skeletonization algorithms. (A generic thinning-and-endpoint sketch appears after this list.)
  4. Understanding the intricacies of the brain often requires spotting and tracking specific neurons over time and across different individuals. For instance, scientists may need to precisely monitor the activity of one neuron even as the brain moves and deforms, or they may want to find universal patterns by comparing signals from the same neuron across different individuals. Both tasks require matching which neuron is which in different images and amongst a constellation of cells. This is theoretically possible in certain ‘model’ animals where every single neuron is known and carefully mapped out. Still, it remains challenging: neurons move relative to one another as the animal changes posture, and the position of a cell is also slightly different between individuals. Sophisticated computer algorithms are increasingly used to tackle this problem, but they are far too slow to track neural signals as real-time experiments unfold. To address this issue, Yu et al. designed a new algorithm based on the Transformer, an artificial neural network originally used to spot relationships between words in sentences. To learn relationships between neurons, the algorithm was fed hundreds of thousands of ‘semi-synthetic’ examples of constellations of neurons. Instead of being painstakingly collated from actual experimental data, these datasets were created by a simulator based on a few simple measurements. Testing the new algorithm on the tiny worm Caenorhabditis elegans revealed that it was faster and more accurate, finding corresponding neurons in about 10 ms. The work by Yu et al. demonstrates the power of using simulations rather than experimental data to train artificial networks. The resulting algorithm can be used immediately to help study how the brain of C. elegans makes decisions or controls movements. Ultimately, this research could allow brain-machine interfaces to be developed. (A toy point-matching baseline appears after this list.)
  5. Many studies of brain aging and neurodegenerative disorders such as Alzheimer’s and Parkinson’s diseases require rapid counts of high signal-to-noise (S:N) stained brain cells, such as neurons and neuroglia (microglial cells), on tissue sections. To increase the throughput efficiency of this work, we have combined deep learning (DL) neural networks and computerized stereology (DL-stereology) for automatic cell counts with low error (<10%) compared to time-intensive manual counts. To date, however, this approach has been limited to sections with a single high S:N immunostain for neurons (NeuN) or microglial cells (Iba-1). The present study expands this approach to protocols that combine immunostains with counterstains, e.g., cresyl violet (CV). In our method, a stain separation technique called Sparse Non-negative Matrix Factorization (SNMF) converts a dual-stained color image to a single gray image showing only the principal immunostain. Validation testing was done using semi-automatic and fully automatic stereology-based counts of sections immunostained for neurons or microglia with CV counterstaining from the neocortex of a transgenic mouse model of tauopathy (Tg4510 mouse) and controls. Cell-count results with principal-stain gray images show average error rates of 16.78% and 28.47% for the semi-automatic approach and 8.51% and 9.36% for the fully automatic DL-stereology approach for neurons and microglia, respectively, as compared to manual cell counts (ground truth). This work indicates that stain separation by SNMF can support high-throughput, fully automatic DL-stereology-based counts of neurons and microglia on counterstained tissue sections. (A simplified stain-separation sketch appears after this list.)
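For item 1 above, a minimal behavioral sketch of the neural-timer specification, assuming a simple countdown register rather than the paper's threshold-gate construction: the state fits in O(log t) bits, mirroring the deterministic bound, but no gate-level or randomized construction is shown. Function and variable names are illustrative.

```python
# A behavioural simulation of the neural-timer specification (assumed sketch,
# not the paper's construction): a spike from x in round i makes y fire for the
# next t consecutive rounds. The countdown needs only ceil(log2(t+1)) bits of
# state, which mirrors the Theta(log t) threshold-gate bound.
import math

def neural_timer(x_spikes, t):
    """x_spikes: iterable of 0/1 input spikes, one per round.
    Returns the output spikes: y fires for the t rounds following each x spike."""
    bits = max(1, math.ceil(math.log2(t + 1)))    # state size in bits, O(log t)
    counter, y_spikes = 0, []
    for x in x_spikes:
        y_spikes.append(1 if counter > 0 else 0)  # y fires while the countdown is live
        counter = max(0, counter - 1)
        if x:
            counter = t                           # an input spike (re)starts the countdown
        assert counter < 2 ** bits                # the counter always fits in O(log t) bits
    return y_spikes

# Example: neural_timer([1, 0, 0, 0, 0, 0], t=3) -> [0, 1, 1, 1, 0, 0]
```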
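For item 2 above, a simplified likelihood-comparison sketch, assuming a discrete-time Poisson GLM as a stand-in for the point-process CIF. The paper's within-trial rate covariates, between-trial gain terms, and per-neuron AIC hyper-parameter selection are omitted, and all names here are hypothetical.

```python
# Simplified Granger-style score for neuron i -> neuron j (assumed sketch):
# fit j's spiking with and without i's history and measure the relative
# log-likelihood gain. A discrete-time Poisson GLM stands in for the CIF.
import numpy as np
import statsmodels.api as sm

def history_matrix(spikes, lags):
    """Stack the previous `lags` bins of a spike train as GLM covariates."""
    spikes = np.asarray(spikes, dtype=float)
    T = len(spikes)
    return np.column_stack([np.r_[np.zeros(k), spikes[:T - k]] for k in range(1, lags + 1)])

def granger_score(spk_j, spk_i, lags=5):
    """Relative log-likelihood gain for neuron j from including neuron i's history."""
    Hj = history_matrix(spk_j, lags)               # self-history of the target neuron j
    Hi = history_matrix(spk_i, lags)               # history of the putative cause i
    X_full = sm.add_constant(np.hstack([Hj, Hi]))
    X_reduced = sm.add_constant(Hj)
    ll_full = sm.GLM(spk_j, X_full, family=sm.families.Poisson()).fit().llf
    ll_reduced = sm.GLM(spk_j, X_reduced, family=sm.families.Poisson()).fit().llf
    return (ll_full - ll_reduced) / abs(ll_full)   # larger -> stronger putative i -> j influence
```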
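For item 3 above, a generic 2D thinning sketch: it skeletonizes a binary mask and reports endpoint pixels, but it is not synapse-aware; the paper's method constrains thinning in 3D volumes so that skeleton endpoints correspond one-to-one with detected synapses.

```python
# Generic skeletonization and endpoint detection (assumed illustration only).
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def skeleton_endpoints(mask):
    """mask: 2D boolean array of a segmented neuronal process."""
    skel = skeletonize(mask.astype(bool))
    # count 8-connected skeleton neighbors of each skeleton pixel
    neighbors = convolve(skel.astype(int), np.ones((3, 3), int), mode="constant") - skel
    endpoints = np.argwhere(skel & (neighbors == 1))  # skeleton pixels with exactly one neighbor
    return skel, endpoints
```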
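For item 4 above, a toy baseline, not Yu et al.'s Transformer: it builds one 'semi-synthetic' constellation by jittering and permuting a template point cloud, then recovers identities by optimal linear assignment on Euclidean distances. The sizes and noise scale are arbitrary assumptions.

```python
# Toy neuron-correspondence baseline (assumed sketch): optimal assignment on
# pairwise distances between a template constellation and a jittered copy.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
template = rng.uniform(0, 100, size=(50, 3))                      # idealized neuron positions (um)
jittered = template + rng.normal(scale=2.0, size=template.shape)  # posture / individual variability
perm = rng.permutation(len(jittered))
observed = jittered[perm]                                         # identities unknown in the new image

cost = cdist(template, observed)                                  # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)                          # one-to-one minimum-cost matching
accuracy = np.mean(perm[cols] == rows)                            # fraction of correctly re-identified neurons
print(f"re-identified {accuracy:.0%} of neurons")
```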
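For item 5 above, a rough stain-separation sketch, assuming plain non-negative matrix factorization on Beer-Lambert optical densities; the sparsity penalty that makes the paper's method "Sparse" NMF, and the matching of factors to known stain colors, are omitted. The function name is hypothetical.

```python
# Rough stain-separation sketch (assumed, simplified): convert RGB to optical
# density, factorize pixels into two stain components with NMF, and keep one
# component as the principal-immunostain gray image.
import numpy as np
from sklearn.decomposition import NMF

def immunostain_gray(rgb):
    """rgb: (H, W, 3) uint8 image containing immunostain + counterstain."""
    od = -np.log((rgb.astype(float) + 1.0) / 256.0)      # Beer-Lambert optical density, non-negative
    h, w, _ = rgb.shape
    model = NMF(n_components=2, init="nndsvda", max_iter=400, random_state=0)
    conc = model.fit_transform(od.reshape(-1, 3))        # per-pixel concentrations of the two stains
    # pick one factor as the "principal" immunostain; a real pipeline would match
    # the factor's color vector (model.components_) against a reference stain color
    idx = int(np.argmax(conc.mean(axis=0)))
    gray = conc[:, idx].reshape(h, w)
    return (255.0 * gray / (gray.max() + 1e-8)).astype(np.uint8)
```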