Latent manifolds provide a compact characterization of neural population activity and of shared co-variability across brain areas. Nonetheless, existing statistical tools for extracting neural manifolds face limitations in terms of interpretability of latents with respect to task variables, and can be hard to apply to datasets with no trial repeats. Here we propose a novel probabilistic framework that allows for interpretable partitioning of population variability within and across areas in the context of naturalistic behavior. Our approach for task-aligned manifold estimation (TAME-GP) extends a probabilistic variant of demixed PCA by (1) explicitly partitioning variability into private and shared sources, (2) using a Poisson noise model, and (3) introducing temporal smoothing of latent trajectories in the form of a Gaussian Process prior. This TAME-GP graphical model allows for robust estimation of task-relevant variability in local population responses, and of shared co-variability between brain areas. We demonstrate the efficiency of our estimator on within-model and biologically motivated simulated data. We also apply it to neural recordings in a closed-loop virtual navigation task in monkeys, demonstrating the capacity of TAME-GP to capture meaningful intra- and inter-area neural variability with single-trial resolution.
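The generative model the abstract describes — Gaussian Process priors over latent trajectories, a split into shared and private latents, and Poisson spike-count observations — can be sketched in a few lines of numpy. This is a minimal forward simulation under assumed dimensions and an RBF kernel, not the TAME-GP estimator itself; all sizes and loading scales here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T time bins within one trial, two recorded areas.
T = 50
t = np.linspace(0.0, 1.0, T)

def rbf_kernel(t, length=0.1, var=1.0):
    # Squared-exponential GP kernel: enforces temporal smoothness of latents.
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

K = rbf_kernel(t) + 1e-6 * np.eye(T)  # jitter for numerical stability

# One shared latent (co-variability across areas) and one private latent per area.
z_shared = rng.multivariate_normal(np.zeros(T), K)
z_priv_a = rng.multivariate_normal(np.zeros(T), K)

# Poisson observation model for area A: log-rates are a linear combination
# of shared and private latents (loadings C_* are made-up illustration values).
n_a = 8
C_shared_a = rng.normal(0.0, 0.3, n_a)
C_priv_a = rng.normal(0.0, 0.3, n_a)
log_rate_a = np.outer(z_shared, C_shared_a) + np.outer(z_priv_a, C_priv_a) + 1.0
spikes_a = rng.poisson(np.exp(log_rate_a))  # (T, n_a) single-trial spike counts
```

Inference in the actual model would invert this generative process to recover the shared and private latents from spike counts alone.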
Modeling Variability in Brain Architecture with Deep Feature Learning
The brain has long been divided into distinct areas based upon its local microstructure, or patterned composition of cells, genes, and proteins. While this taxonomy is incredibly useful and provides an essential roadmap for comparing two brains, there is also immense anatomical variability within areas that must be incorporated into models of brain architecture. In this work we leverage the expressive power of deep neural networks to create a data-driven model of intra- and inter-brain area variability. To this end, we train a convolutional neural network that learns relevant microstructural features directly from brain imagery. We then extract features from the network and fit a simple classifier to them, thus creating a simple, robust, and interpretable model of brain architecture. We further propose and show preliminary results for the use of features from deep neural networks in conjunction with unsupervised learning techniques to find fine-grained structure within brain areas. We apply our methods to micron-scale X-ray microtomography images spanning multiple regions in the mouse brain and demonstrate that our deep feature-based model can reliably discriminate between brain areas, is robust to noise, and can be used to reveal anatomically relevant patterns in neural architecture that the network wasn't trained to find.
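The recipe in this abstract — extract features with a deep network, then fit a simple, interpretable classifier on top of them — can be illustrated with a toy stand-in. Below, a frozen random ReLU projection plays the role of the trained CNN feature extractor (the paper learns its features from X-ray microtomography; everything here, including the two-class synthetic "patches", is invented for illustration), and a nearest-centroid rule is the simple classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a trained CNN: a frozen random ReLU projection.
W = rng.normal(0.0, 1.0, (64, 16))

def extract_features(patches):
    h = np.maximum(patches @ W, 0.0)  # ReLU feature map
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)

# Synthetic "image patches" from two brain areas with different statistics.
def make_patches(mean, n):
    return rng.normal(mean, 1.0, (n, 64))

X_train = np.vstack([make_patches(0.0, 100), make_patches(3.0, 100)])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([make_patches(0.0, 50), make_patches(3.0, 50)])
y_test = np.array([0] * 50 + [1] * 50)

F_train, F_test = extract_features(X_train), extract_features(X_test)

# Simple, interpretable classifier: nearest class centroid in feature space.
centroids = np.stack([F_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(F_test[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y_test).mean()
```

The design point the abstract makes survives even in this toy version: once features separate the classes, the classifier on top can be trivially simple and therefore easy to inspect.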
- Award ID(s): 1755871
- PAR ID: 10167494
- Date Published:
- Journal Name: 2019 53rd Asilomar Conference on Signals, Systems, and Computers
- Page Range / eLocation ID: 1186 to 1191
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Deep learning has been applied to magnetic resonance imaging (MRI) for a variety of purposes, ranging from the acceleration of image acquisition and image denoising to tissue segmentation and disease diagnosis. Convolutional neural networks have been particularly useful for analyzing MRI data due to the regularly sampled spatial and temporal nature of the data. However, advances in the field of brain imaging have led to network- and surface-based analyses that are often better represented in the graph domain. In this analysis, we propose a general-purpose cortical segmentation method that, given resting-state connectivity features readily computed during conventional MRI pre-processing and a set of corresponding training labels, can generate cortical parcellations for new MRI data. We applied recent advances in the field of graph neural networks to the problem of cortical surface segmentation, using resting-state connectivity to learn discrete maps of the human neocortex. We found that graph neural networks accurately learn low-dimensional representations of functional brain connectivity that can be naturally extended to map the cortices of new datasets. After optimizing over algorithm type, network architecture, and training features, our approach yielded mean classification accuracies of 79.91% relative to a previously published parcellation. We describe how hyperparameter choices, including training and testing data duration, network architecture, and algorithm choice, affect model performance.
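The core operation behind the graph-neural-network parcellation this abstract describes is message passing over a surface mesh. A minimal sketch of one graph-convolution step in the common symmetric-normalization style (H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)) is below; the ring graph, feature sizes, and random weights are all illustrative stand-ins, with node features playing the role of resting-state connectivity profiles.

```python
import numpy as np

rng = np.random.default_rng(2)

n_nodes, in_dim, out_dim = 6, 4, 3

# Toy mesh adjacency: a ring of surface vertices.
A = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:
    A[i, j] = A[j, i] = 1.0

A_hat = A + np.eye(n_nodes)                  # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

H = rng.normal(size=(n_nodes, in_dim))       # per-vertex connectivity features
W = rng.normal(size=(in_dim, out_dim))       # learnable layer weights
H_next = np.maximum(A_norm @ H @ W, 0.0)     # one graph-convolution layer + ReLU
```

Stacking such layers and ending with a per-node softmax over parcel labels yields a surface segmenter in the spirit of the method described above.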
Schizophrenia is a severe brain disorder with serious symptoms including delusions, disorganized speech, and hallucinations that can have a long-term detrimental impact on different aspects of a patient's life. It is still unclear what the main cause of schizophrenia is, but a combination of altered brain connectivity and structure may play a role. Neuroimaging data has been useful in characterizing schizophrenia, but there has been very little work focused on voxel-wise changes in multiple brain networks over time, despite evidence that functional networks exhibit complex spatiotemporal changes over time within individual subjects. Recent studies have primarily focused on static (average) features of functional data or on temporal variations between fixed networks; however, such approaches are not able to capture multiple overlapping networks which change at the voxel level. In this work, we employ a deep residual convolutional neural network (CNN) model to extract 53 different spatiotemporal networks, each of which captures dynamism within various domains including subcortical, cerebellar, visual, sensorimotor, auditory, cognitive control, and default mode. We apply this approach to study spatiotemporal brain dynamism at the voxel level within multiple functional networks extracted from a large functional magnetic resonance imaging (fMRI) dataset of individuals with schizophrenia (N = 708) and controls (N = 510). Our analysis reveals widespread group-level differences across multiple networks and spatiotemporal features, including voxel-wise variability, magnitude, and temporal functional network connectivity, in widespread regions expected to be impacted by the disorder. We compare with typical average spatial amplitude and show that highly structured and neuroanatomically relevant results are missed if one does not consider the voxel-wise spatial dynamics. Importantly, our approach can summarize static, temporal dynamic, spatial dynamic, and spatiotemporal dynamic features, thus providing a powerful approach to unify and compare these various perspectives. In sum, the proposed approach highlights the importance of accounting for both temporal and spatial dynamism in whole-brain neuroimaging data, shows a high level of sensitivity to schizophrenia by revealing global but spatially unique dynamics that differ between groups, and may be especially important in studies focused on the development of brain-based biomarkers.
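Two of the spatiotemporal features named in this abstract — voxel-wise temporal variability within a network map, and windowed temporal functional network connectivity (FNC) between network time courses — are straightforward to compute once per-network voxel-wise time courses are available. The sketch below uses random arrays in place of the CNN-derived network decomposition; all shapes and the window length are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

T, n_voxels, n_networks = 120, 30, 4

# Stand-in for per-network voxel-wise time courses (networks x time x voxels),
# e.g. as produced by a spatiotemporal network decomposition.
maps = rng.normal(size=(n_networks, T, n_voxels))

# Feature 1: voxel-wise temporal variability within each network map.
voxel_variability = maps.std(axis=1)              # (n_networks, n_voxels)

# Feature 2: temporal FNC -- correlations between network time courses
# computed in sliding (here, non-overlapping) windows.
timecourses = maps.mean(axis=2)                   # (n_networks, T)
win = 30
fnc_windows = []
for start in range(0, T - win + 1, win):
    seg = timecourses[:, start:start + win]
    fnc_windows.append(np.corrcoef(seg))          # (n_networks, n_networks)
fnc_windows = np.stack(fnc_windows)               # FNC matrix per window
```

Group-level analyses would then compare these per-subject feature maps and matrices between patients and controls.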
Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. This success can be attributed in part to their ability to represent and generate natural images well. Contrary to classical tools such as wavelets, image-generating deep neural networks have a large number of parameters (typically a multiple of their output dimension) and need to be trained on large datasets. In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters. The deep decoder has a simple architecture with no convolutions and fewer weight parameters than the output dimensionality. This underparameterization enables the deep decoder to compress images into a concise set of network weights, which we show is on par with wavelet-based thresholding. Further, underparameterization provides a barrier to overfitting, allowing the deep decoder to have state-of-the-art performance for denoising. The deep decoder is simple in the sense that each layer has an identical structure that consists of only one upsampling unit, pixel-wise linear combination of channels, ReLU activation, and channelwise normalization. This simplicity makes the network amenable to theoretical analysis, and it sheds light on the aspects of neural networks that enable them to form effective signal representations.
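The layer structure this abstract spells out — upsampling, a pixel-wise (1×1) linear combination of channels, ReLU, and channelwise normalization — is simple enough to write directly. The sketch below implements one such layer in numpy; channel counts, the 2× nearest-neighbor upsampling, and the random weights are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def deep_decoder_layer(x, W, eps=1e-5):
    # x: (C, H, W_img) feature map; W: (C_out, C) 1x1 channel-mixing weights.
    x = x.repeat(2, axis=1).repeat(2, axis=2)     # nearest-neighbor upsampling
    x = np.einsum('oc,chw->ohw', W, x)            # pixel-wise linear combination
    x = np.maximum(x, 0.0)                        # ReLU activation
    mean = x.mean(axis=(1, 2), keepdims=True)     # channelwise normalization
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

x0 = rng.normal(size=(8, 4, 4))     # small random code tensor (the "input")
W1 = rng.normal(size=(8, 8)) * 0.5  # weights to be fit per-image, no training set
out = deep_decoder_layer(x0, W1)    # spatial size doubles: (8, 8, 8)
```

Because every layer shares this structure and there are no convolutions, the total parameter count stays below the output dimensionality, which is the underparameterization the abstract leans on.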
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis and show that it improves the sample efficiency of both state-based and image-based RL. We perform an infinite-width analysis of our architecture using the Neural Tangent Kernel and theoretically show that tuning the initial variance of the Fourier basis is equivalent to functional regularization of the learned deep network. That is, these learned Fourier features allow for adjusting the degree to which networks underfit or overfit different frequencies in the training data, and hence provide a controlled mechanism to improve the stability and performance of RL optimization. Empirically, this allows us to prioritize learning low-frequency functions and speed up learning by reducing networks' susceptibility to noise in the optimization process, such as during Bellman updates. Experiments on standard state-based and image-based RL benchmarks show clear benefits of our architecture over the baselines. Code available at https://github.com/alexlioralexli/learned-fourier-features
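The input embedding this abstract describes is compact enough to sketch: project the state through a Gaussian matrix B (made trainable in the paper) and take sin/cos of the projections. The initial variance of B is the knob the analysis ties to functional regularization — smaller initial scale biases the network toward low-frequency functions. Dimensions and the scale value below are illustrative, and the official implementation at the linked repository should be treated as authoritative.

```python
import numpy as np

rng = np.random.default_rng(4)

def fourier_features(x, B):
    # Embed inputs into a Fourier basis: [sin(Bx), cos(Bx)].
    proj = x @ B.T                      # (batch, n_features)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

state_dim, n_features = 3, 32
init_scale = 1.0                        # smaller -> lower-frequency embedding
B = rng.normal(0.0, init_scale, (n_features, state_dim))  # learned in practice

x = rng.normal(size=(5, state_dim))     # a batch of RL states
phi = fourier_features(x, B)            # (5, 64), fed to the downstream network
```

In the RL setting, phi replaces the raw state as the input to the policy or value network, and B is updated by gradient descent along with the rest of the weights.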