Abstract Investigations into how individual neurons encode behavioral variables of interest have revealed specific representations in single neurons, such as place and object cells, as well as a wide range of cells with conjunctive encodings or mixed selectivity. However, because most experiments examine neural activity within a single task, it is currently unclear if and how neural representations change across task contexts. The medial temporal lobe is particularly salient in this discussion, as it is known to be important for multiple behaviors, including spatial navigation and memory; however, the relationship between these functions remains unclear. Here, to investigate how representations in single neurons vary across task contexts in the medial temporal lobe, we collected and analyzed single-neuron activity from human participants as they completed a paired-task session consisting of a passive-viewing visual working memory task and a spatial navigation and memory task. Five patients contributed 22 paired-task sessions, which were spike sorted together so that the same putative single neurons could be compared between the different tasks. Within each task, we replicated concept-related activations in the working memory task, as well as target-location and serial-position responsive cells in the navigation task. When comparing neuronal activity between tasks, we first established that a significant number of neurons maintained the same kind of representation, responding to stimulus presentations across tasks. Further, we found cells that changed the nature of their representation across tasks, including a significant number of cells that were stimulus responsive in the working memory task and responded to serial position in the spatial task. Overall, our results support a flexible encoding of multiple, distinct aspects of different tasks by single neurons in the human medial temporal lobe, whereby some individual neurons change the nature of their feature coding between task contexts.
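As an illustration of the kind of cross-task comparison described above, the sketch below tests whether a single spike-sorted unit is stimulus-responsive in each task by comparing stimulus-epoch spike counts against baseline with a permutation test. This is a minimal sketch under assumed inputs, not the study's actual pipeline; the arrays `baseline_wm`, `stim_wm`, `baseline_nav`, and `stim_nav` are hypothetical per-trial spike counts.

```python
# Illustrative sketch (not the authors' analysis code): permutation test for a
# difference between baseline and stimulus-epoch spike counts of one unit.
import numpy as np

def permutation_response_test(baseline_counts, stim_counts, n_perm=10_000, seed=0):
    """Return a two-sided permutation p-value for a difference in mean spike count."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([baseline_counts, stim_counts])
    n_base = len(baseline_counts)
    observed = stim_counts.mean() - baseline_counts.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[n_base:].mean() - perm[:n_base].mean()
    return np.mean(np.abs(null) >= np.abs(observed))

# Hypothetical usage for one unit tracked across the paired-task session:
# p_wm  = permutation_response_test(baseline_wm,  stim_wm)   # working memory task
# p_nav = permutation_response_test(baseline_nav, stim_nav)  # navigation task
# A unit significant in both tasks would count as maintaining a stimulus-responsive
# representation across task contexts.
```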
Efficient estimation of neural tuning during naturalistic behavior
Recent technological advances in systems neuroscience have led to a shift away from using simple tasks with low-dimensional, well-controlled stimuli toward trying to understand neural activity during naturalistic behavior. However, with the increase in the number and complexity of task-relevant features, standard analyses such as estimating tuning functions become challenging. Here, we use a Poisson generalized additive model (P-GAM) with spline nonlinearities and an exponential link function to map a large number of task variables (input stimuli, behavioral outputs, or the activity of other neurons, modeled as discrete events or continuous variables) into spike counts. We develop efficient procedures for parameter learning by optimizing a generalized cross-validation score and infer marginal confidence bounds for the contribution of each feature to neural responses. This allows us to robustly identify a minimal set of task features that each neuron is responsive to, circumventing computationally demanding model comparison. We show that our estimation procedure outperforms traditional regularized GLMs in terms of both fit quality and computing time. When applied to neural recordings from monkeys performing a virtual reality spatial navigation task, the P-GAM reveals mixed selectivity and preferential coupling between neurons with similar tuning.
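A minimal sketch of the core idea, spline nonlinearities feeding a Poisson model with an exponential (log) link, is shown below using scikit-learn as a stand-in. The real P-GAM selects smoothness by optimizing a generalized cross-validation score and infers marginal confidence bounds, neither of which is reproduced here; the task variables (`speed`, `heading`) and tuning curve are simulated for illustration only.

```python
# Sketch of a spline-basis Poisson regression from task variables to spike counts.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_bins = 5000
speed = rng.uniform(0, 40, n_bins)            # continuous task variable (simulated)
heading = rng.uniform(-np.pi, np.pi, n_bins)  # another continuous variable (simulated)
X = np.column_stack([speed, heading])

# Simulate spike counts from a nonlinear tuning curve (for demonstration only).
rate = np.exp(0.4 * np.sin(heading) - ((speed - 20) / 15) ** 2)
spike_counts = rng.poisson(rate)

# One spline basis per variable, then a log-link Poisson fit (ridge-penalized here,
# rather than GCV-selected smoothness as in the paper).
model = make_pipeline(
    ColumnTransformer([
        ("speed_spline", SplineTransformer(n_knots=8, degree=3), [0]),
        ("heading_spline", SplineTransformer(n_knots=8, degree=3), [1]),
    ]),
    PoissonRegressor(alpha=1e-3, max_iter=500),
)
model.fit(X, spike_counts)
predicted_rate = model.predict(X)  # smooth estimated tuning over speed and heading
```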
- Award ID(s): 1922658
- PAR ID: 10217553
- Date Published:
- Journal Name: Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract A central challenge in face perception research is to understand how neurons encode face identities. This challenge has not been met, largely due to the lack of simultaneous access to the entire face-processing neural network and the lack of a comprehensive, multifaceted model capable of characterizing a large number of facial features. Here, we addressed this challenge by conducting in silico experiments using a pre-trained face recognition deep neural network (DNN) with a diverse array of stimuli. We identified a subset of DNN units selective to face identities, and these identity-selective units demonstrated generalized discriminability to novel faces. Visualization and manipulation of the network revealed the importance of identity-selective units in face recognition. Importantly, using our monkey and human single-neuron recordings, we directly compared the responses of artificial units and real primate neurons to the same stimuli and found that artificial units shared a similar representation of facial features with primate neurons. We also observed a region-based feature coding mechanism in DNN units, as in human neurons. Together, by directly linking artificial and primate neural systems, our results shed light on how the primate brain performs face recognition tasks.
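For concreteness, here is a hedged sketch of one way to flag identity-selective units; it is an assumed method for illustration, not the paper's exact criterion. The `activations` matrix (images x units, taken from some DNN layer) and `identity_labels` are hypothetical inputs.

```python
# Illustrative sketch: keep units whose activation varies across face identities,
# assessed with a one-way ANOVA over identity groups.
import numpy as np
from scipy.stats import f_oneway

def identity_selective_units(activations, identity_labels, alpha=0.01):
    """Return indices of units with a significant main effect of identity."""
    identities = np.unique(identity_labels)
    selective = []
    for unit in range(activations.shape[1]):
        groups = [activations[identity_labels == ident, unit] for ident in identities]
        _, p = f_oneway(*groups)
        if p < alpha / activations.shape[1]:  # crude Bonferroni correction
            selective.append(unit)
    return np.array(selective)
```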
Abstract Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, describing redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. This relationship holds for optimal feedforward networks of modest complexity, when experiments are performed under natural nuisance variation. We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.
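As a toy illustration of the measured quantity only (not the theory's derivation or its optimality prediction), the snippet below computes each neuron's choice correlation, the trial-by-trial correlation between its fluctuating response and the behavioral choice at a fixed stimulus. The simulated responses, decoder weights, and choices are entirely hypothetical.

```python
# Sketch: choice correlations from simulated fixed-stimulus trials.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 2000, 50
responses = rng.normal(size=(n_trials, n_neurons))   # fluctuating fixed-stimulus responses
readout = rng.normal(size=n_neurons)                 # hypothetical decoder weights
choices = (responses @ readout + rng.normal(size=n_trials) > 0).astype(float)

# Correlate each neuron's fluctuations with the binary choice across trials.
choice_corr = np.array([
    np.corrcoef(responses[:, k], choices)[0, 1] for k in range(n_neurons)
])
```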
Both experimenter-controlled stimuli and stimulus-independent variables impact cortical neural activity. A major hurdle to understanding neural representation is distinguishing between qualitatively different causes of fluctuating population activity. We applied an unsupervised low-rank tensor decomposition analysis to population activity recorded in the visual cortex of awake mice in response to repeated presentations of naturalistic visual stimuli. We found that neurons covaried largely independently of individual neurons' stimulus-response reliability and thus encoded both stimulus-driven and stimulus-independent variables. Importantly, a neuron's response reliability and the neuronal coactivation patterns substantially reorganized for different external visual inputs. Analysis of recurrent balanced neural network models revealed that both the stimulus specificity and the mixed encoding of qualitatively different variables can arise from clustered external inputs. These results establish that coactive neurons with diverse response reliability mediate a mixed representation of stimulus-driven and stimulus-independent variables in the visual cortex. NEW & NOTEWORTHY: V1 neurons covary largely independently of an individual neuron's response reliability. A single neuron's response reliability imposes only a weak constraint on its encoding capabilities. The visual stimulus instructs a neuron's reliability and coactivation pattern. Network models reveal that these properties can arise from clustered external inputs.
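Below is a sketch of the kind of unsupervised low-rank tensor decomposition described above, assuming the tensorly package and a random stand-in tensor rather than the recorded data; the paper's exact model, rank selection, and preprocessing are not reproduced.

```python
# Sketch: CP (PARAFAC) decomposition of a trials x neurons x time activity tensor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(2)
activity = rng.poisson(1.0, size=(120, 80, 60)).astype(float)  # trials x neurons x time

rank = 5
weights, factors = parafac(tl.tensor(activity), rank=rank, init="random", n_iter_max=200)
trial_factors, neuron_factors, time_factors = factors
# Each rank-1 component couples a set of trials, a neuronal coactivation pattern,
# and a temporal profile; stimulus-driven versus stimulus-independent components
# could then be compared by how their trial factors align with the stimulus labels.
```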
The time series behavior of gas consumption is highly irregular, non-stationary, and volatile due to its dependence on the weather and on users' habits and lifestyle. This complicates the modeling and forecasting of gas consumption with most existing time series modeling techniques, especially when missing values and outliers are present. To demonstrate and overcome these problems, we investigate two approaches to modeling gas consumption: Generalized Additive Models (GAM) and Long Short-Term Memory (LSTM) networks. We perform our evaluations on two building datasets from two different continents. We examine the influence of each selected feature, the tuning parameters, and the characteristics of the gas consumption on forecasting performance. We compare the performance of GAM and LSTM with other state-of-the-art forecasting approaches. We show that LSTM outperforms GAM and other existing approaches; however, GAM provides more interpretable results for building management systems (BMS).
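A minimal sketch of the GAM side of that comparison follows, assuming the pygam package and hypothetical features (outdoor temperature and hour of day, which may not match the paper's inputs); the smooth spline terms provide the per-feature effects that make GAMs interpretable for BMS.

```python
# Sketch: a spline-based GAM forecaster for (simulated) building gas consumption.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
n = 3000
temperature = rng.uniform(-10, 30, n)             # outdoor temperature (degC), simulated
hour = rng.integers(0, 24, n).astype(float)       # hour of day, simulated
X = np.column_stack([temperature, hour])
gas = 50 - 1.2 * temperature + 5 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 3, n)

gam = LinearGAM(s(0) + s(1)).fit(X, gas)          # one spline term per feature
forecast = gam.predict(X)
# gam.partial_dependence(term=0, X=gam.generate_X_grid(term=0)) exposes the fitted
# temperature effect, the kind of interpretable output contrasted with the
# (more accurate) LSTM forecasts.
```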