Title: Fitting Splines to Axonal Arbors Quantifies Relationship Between Branch Order and Geometry
Neuromorphology is crucial to identifying neuronal subtypes and understanding learning. It is also implicated in neurological disease. However, standard morphological analysis focuses on macroscopic features such as branching frequency and connectivity between regions, and often neglects the internal geometry of neurons. In this work, we treat neuron trace points as a sampling of differentiable curves and fit them with a set of branching B-splines. We designed our representation with the Frenet-Serret formulas from differential geometry in mind. The Frenet-Serret formulas completely characterize smooth curves, and involve two parameters, curvature and torsion. Our representation makes it possible to compute these parameters from neuron traces in closed form. These parameters are defined continuously along the curve, in contrast to other parameters like tortuosity which depend on start and end points. We applied our method to a dataset of cortical projection neurons traced in two mouse brains, and found that the parameters are distributed differently between primary, collateral, and terminal axon branches, thus quantifying geometric differences between different components of an axonal arbor. The results agreed in both brains, further validating our representation. The code used in this work can be readily applied to neuron traces in SWC format and is available in our open-source Python package brainlit: http://brainlit.neurodata.io/.
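The curvature and torsion mentioned in the abstract follow directly from spline derivatives. As a minimal sketch (not the brainlit implementation), the example below fits a parametric cubic B-spline to hypothetical 3D trace points with SciPy and evaluates curvature and torsion from the first three derivatives; the helix is a toy stand-in for points read from an SWC trace.

```python
# Illustrative sketch only: fit a cubic B-spline to 3D points and compute
# curvature and torsion from its derivatives (Frenet-Serret quantities).
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical trace points along one branch: a helix as a toy example.
t = np.linspace(0, 2 * np.pi, 50)
points = np.stack([np.cos(t), np.sin(t), 0.2 * t])

# Fit a parametric cubic B-spline through the points.
tck, u = splprep(points, s=0)

# First, second, and third derivatives with respect to the spline parameter.
d1 = np.array(splev(u, tck, der=1))
d2 = np.array(splev(u, tck, der=2))
d3 = np.array(splev(u, tck, der=3))

# Curvature: |r' x r''| / |r'|^3 ; torsion: (r' x r'') . r''' / |r' x r''|^2.
cross = np.cross(d1.T, d2.T)
speed = np.linalg.norm(d1, axis=0)
curvature = np.linalg.norm(cross, axis=1) / speed**3
torsion = np.einsum("ij,ij->i", cross, d3.T) / np.linalg.norm(cross, axis=1)**2

print(curvature.mean(), torsion.mean())  # roughly constant for a helix
```

Because a helix has constant curvature and torsion, it is a convenient sanity check before applying the same formulas to real traces.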
Award ID(s):
2014862
PAR ID:
10331116
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Neuroinformatics
Volume:
15
ISSN:
1662-5196
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution. These images offer the potential to assemble brain-wide atlases of neuron morphology, but manual neuron reconstruction remains a bottleneck. Several automatic reconstruction algorithms exist, but most focus on single neuron images. In this paper, we present a probabilistic reconstruction method, ViterBrain, which combines a hidden Markov state process that encodes neuron geometry with a random field appearance model of neuron fluorescence. ViterBrain utilizes dynamic programming to compute the global maximizer of what we call the most probable neuron path. We applied our algorithm to imperfect image segmentations, and showed that it can follow axons in the presence of noise or nearby neurons. We also provide an interactive framework where users can trace neurons by fixing start and endpoints. ViterBrain is available in our open-source Python package.
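As a loose illustration of the dynamic-programming idea (not the ViterBrain code), the "most probable neuron path" can be framed as a minimum-cost path over negative log transition probabilities between image fragments; the fragments and costs below are hypothetical.

```python
# Minimal sketch: most probable path as a shortest path over -log probabilities.
import networkx as nx

G = nx.DiGraph()
# Hypothetical fragments a..e; weight = -log P(transition | geometry, appearance)
edges = [("a", "b", 0.4), ("a", "c", 1.2), ("b", "d", 0.3),
         ("c", "d", 0.8), ("d", "e", 0.5)]
G.add_weighted_edges_from(edges)

# Dynamic programming (Dijkstra) finds the minimum-cost, i.e. most probable, path.
path = nx.dijkstra_path(G, source="a", target="e")
cost = nx.dijkstra_path_length(G, source="a", target="e")
print(path, cost)  # ['a', 'b', 'd', 'e'] with total -log probability 1.2
```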
  2. Abstract: We consider the problem of finding an accurate representation of neuron shapes, extracting sub-cellular features, and classifying neurons based on neuron shapes. In neuroscience research, the skeleton representation is often used as a compact and abstract representation of neuron shapes. However, existing methods are limited to extracting and analyzing "curve" skeletons, which can only be applied to tubular shapes. This paper presents a 3D neuron morphology analysis method for more general and complex neuron shapes. First, we introduce the concept of a skeleton mesh to represent general neuron shapes and propose a novel method for computing mesh representations from 3D surface point clouds. A skeleton graph is then obtained from the skeleton mesh and is used to extract sub-cellular features. Finally, an unsupervised learning method is used to embed the skeleton graph for neuron classification. Extensive experimental results are provided and demonstrate the robustness of our method in analyzing neuron morphology.
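As a rough sketch of the kind of sub-cellular features a skeleton graph exposes (the toy graph and edge lengths below are hypothetical and not the paper's pipeline):

```python
# Illustrative only: read simple morphological features off a skeleton graph.
import networkx as nx

skeleton = nx.Graph()
skeleton.add_weighted_edges_from([
    (0, 1, 5.0), (1, 2, 3.0), (1, 3, 4.0), (3, 4, 2.5)  # node pairs, edge length (um)
], weight="length")

degrees = dict(skeleton.degree())
branch_points = [n for n, d in degrees.items() if d >= 3]
tips = [n for n, d in degrees.items() if d == 1]
total_length = sum(l for _, _, l in skeleton.edges(data="length"))

print(len(branch_points), len(tips), total_length)  # 1 branch point, 3 tips, 14.5 um
```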
  3. Principal component analysis of cylindrical neighborhoods is proposed to study the local geometry of embedded Riemannian manifolds. At every generic point and scale, a high-dimensional cylinder orthogonal to the tangent space at the point cuts out a path-connected patch whose point-set distribution in ambient space encodes the intrinsic and extrinsic curvature. The covariance matrix of the points from that neighborhood has eigenvectors whose scale limit tends to the Frenet-Serret frame for curves, and to what we call the Ricci-Weingarten principal directions for submanifolds. More importantly, the limit of differences and products of eigenvalues can be used to recover curvature information at the point. The formula for hypersurfaces in terms of principal curvatures is particularly simple and plays a crucial role in the study of higher-codimension cases.
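A small numerical check of the curve case (not taken from the paper, and using a simple parameter window rather than the cylindrical construction): the top covariance eigenvector of a small patch of a helix should align with the tangent of the Frenet-Serret frame at the patch center.

```python
# Toy check: local PCA of a curve patch recovers the tangent direction.
import numpy as np

t = np.linspace(-0.2, 0.2, 2001)                      # small parameter window
curve = np.stack([np.cos(t), np.sin(t), 0.2 * t], 1)  # helix patch around t = 0

centered = curve - curve.mean(axis=0)
cov = centered.T @ centered / len(curve)
eigvals, eigvecs = np.linalg.eigh(cov)                # ascending eigenvalues

tangent_est = eigvecs[:, -1]                          # top eigenvector
tangent_true = np.array([0.0, 1.0, 0.2])              # r'(0) for this helix
tangent_true /= np.linalg.norm(tangent_true)

print(abs(tangent_est @ tangent_true))                # close to 1
```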
  4. Morrison, Abigail (Ed.)
    Assessing directional influences between neurons is instrumental to understanding how brain circuits process information. To this end, Granger causality, a technique originally developed for time-continuous signals, has been extended to discrete spike trains. A fundamental assumption of this technique is that the temporal evolution of neuronal responses must be due only to endogenous interactions between recorded units, including self-interactions. This assumption is however rarely met in neurophysiological studies, where the response of each neuron is modulated by other exogenous causes such as, for example, other unobserved units or slow adaptation processes. Here, we propose a novel point-process Granger causality technique that is robust with respect to the two most common exogenous modulations observed in real neuronal responses: within-trial temporal variations in spiking rate and between-trial variability in their magnitudes. This novel method works by explicitly including both types of modulations into the generalized linear model of the neuronal conditional intensity function (CIF). We then assess the causal influence of neuron i onto neuron j by measuring the relative reduction of neuron j's point-process likelihood obtained when considering or removing neuron i. The CIF's hyper-parameters are set on a per-neuron basis by minimizing Akaike's information criterion. In synthetic data sets, generated by means of random processes or networks of integrate-and-fire units, the proposed method recovered the underlying ground-truth connectivity pattern with high accuracy, sensitivity, and robustness. Application of presently available point-process Granger causality techniques instead produced a significant number of false-positive connections. In real spiking responses recorded from neurons in the monkey pre-motor cortex (area F5), our method revealed many causal relationships between neurons as well as the temporal structure of their interactions. Given its robustness, our method can be effectively applied to real neuronal data. Furthermore, its explicit estimate of the effects of unobserved causes on the recorded neuronal firing patterns can help decompose their temporal variations into endogenous and exogenous components.
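The likelihood-comparison step can be sketched with an ordinary Poisson GLM, a simplification of the CIF model described above; the synthetic data, one-bin lag structure, and statsmodels usage are illustrative assumptions, not the authors' implementation.

```python
# Schematic sketch: does neuron i's history improve prediction of neuron j?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 5000
spikes_i = rng.binomial(1, 0.1, T)                     # neuron i spike train
rate_j = 0.02 + 0.15 * np.roll(spikes_i, 1)            # j is driven by i at lag 1
spikes_j = rng.binomial(1, rate_j, T)                  # neuron j spike train

hist_j = np.roll(spikes_j, 1)                          # j's own one-bin history
hist_i = np.roll(spikes_i, 1)                          # i's one-bin history

X_reduced = sm.add_constant(np.column_stack([hist_j]))          # without neuron i
X_full = sm.add_constant(np.column_stack([hist_j, hist_i]))     # with neuron i

fit_reduced = sm.GLM(spikes_j, X_reduced, family=sm.families.Poisson()).fit()
fit_full = sm.GLM(spikes_j, X_full, family=sm.families.Poisson()).fit()

# A large drop in AIC when including i's history suggests a directed i -> j
# influence in this Granger-style sense.
print(fit_reduced.aic - fit_full.aic)
```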