Title: Quantifying Information Conveyed by Large Neuronal Populations
Quantifying the mutual information between the inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluating the mutual information is generally intractable for large systems because the number of terms that must be evaluated grows exponentially. Here we show how the information contained in the responses of large neural populations can be computed effectively, provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus. Neural responses in this model can remain sensitive to multiple stimulus components. We show that the mutual information in this model can be effectively approximated as a sum of lower-dimensional conditional mutual information terms. The approximations become exact in the limit of large neural populations and under certain conditions on the distribution of receptive fields across the population. We find empirically that these approximations continue to work well even when the conditions on the receptive field distributions are not fulfilled. The computational cost of the proposed methods grows linearly in the dimension of the input and compares favorably with other approximations.
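To make the source of the exponential cost concrete, here is a minimal, hypothetical sketch (not from the paper; the population size, filters, and stimulus set are invented for illustration). It computes I(X; R) exactly for a tiny population of conditionally independent logistic neurons by enumerating all 2^N binary response words, which is the brute-force cost that the paper's sum of lower-dimensional conditional mutual information terms avoids.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy setup (illustrative only): K discrete stimuli in D dimensions,
# N logistic neurons with random linear filters.
K, N, D = 8, 4, 3
stimuli = rng.normal(size=(K, D))
filters = rng.normal(size=(N, D))
p_x = np.full(K, 1.0 / K)                # uniform stimulus prior

# P(r_i = 1 | x): logistic function of a (here linear) function of x
p_spike = sigmoid(stimuli @ filters.T)   # shape (K, N)

def mutual_information(p_x, p_spike):
    """Exact I(X; R) in bits, summing over all 2^N response words --
    the exponentially growing enumeration the approximation replaces."""
    K, N = p_spike.shape
    mi = 0.0
    for r in product([0, 1], repeat=N):
        r = np.array(r)
        # neurons are conditionally independent given the stimulus
        p_r_given_x = np.prod(np.where(r, p_spike, 1 - p_spike), axis=1)
        p_r = np.dot(p_x, p_r_given_x)
        mask = p_r_given_x > 0
        mi += np.sum(p_x[mask] * p_r_given_x[mask]
                     * np.log2(p_r_given_x[mask] / p_r))
    return mi

print(mutual_information(p_x, p_spike))  # bounded above by log2(K) bits
```

Doubling N doubles nothing in the model description but doubles the exponent in the 2^N enumeration, which is why a decomposition with linear cost in the input dimension matters.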
Award ID(s):
1724421
PAR ID:
10120229
Author(s) / Creator(s):
Date Published:
Journal Name:
Neural Computation
Volume:
31
Issue:
6
ISSN:
0899-7667
Page Range / eLocation ID:
1015 to 1047
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. There are many studies of approximation using deep neural networks. In this paper, the authors provide another proof that deep neural networks are universal approximators. In their earlier work, the authors showed that an arbitrary binary target function can be effectively rewritten in terms of a set of strings, or a set of subsets, and that a single hidden neuron can identify one and only one string or subset. An arbitrary binary target function can therefore be effectively rewritten as a neural network with one hidden layer. In this study, the authors impose locality on the deep neural network and show that an arbitrary binary target function can be effectively rewritten as a locally connected deep neural network with many hidden layers. Although localization increases the size of the network, it generally increases the speed of training for large networks.
  2. A key aspect of the neural coding problem is understanding how representations of afferent stimuli are built through the dynamics of learning and adaptation within neural networks. The infomax paradigm is built on the premise that such learning attempts to maximize the mutual information between input stimuli and neural activities. In this letter, we tackle the problem of such information-based neural coding with an eye toward two conceptual hurdles. Specifically, we examine and then show how this form of coding can be achieved with online input processing. Our framework thus obviates the biological incompatibility of optimization methods that rely on global network awareness and batch processing of sensory signals. Central to our result is the use of variational bounds as a surrogate objective function, an established technique that has not previously been shown to yield online policies. We obtain learning dynamics for both linear-continuous and discrete spiking neural encoding models under the umbrella of linear gaussian decoders. This result is enabled by approximating certain information quantities in terms of neuronal activity via pairwise feedback mechanisms. Furthermore, we tackle the problem of how such learning dynamics can be realized with strict energetic constraints. We show that endowing networks with auxiliary variables that evolve on a slower timescale can allow for the realization of saddle-point optimization within the neural dynamics, leading to neural codes with favorable properties in terms of both information and energy. 
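The use of a variational bound as a surrogate objective in the letter above can be illustrated with a small, hypothetical sketch (all dimensions, noise levels, and the encoder matrix are invented; this is the generic variational lower bound I(X; R) >= H(X) + E[log q(X | R)] with a linear Gaussian decoder, not the letter's online learning rule):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-continuous encoder: r = W x + noise, decoded by a
# linear Gaussian model q(x | r) = N(A r, s2 * I). The variational bound
# I(X; R) >= H(X) + E[log q(X | R)] holds for any decoder q.
D, M, S = 2, 3, 20000
W = rng.normal(size=(M, D))
x = rng.normal(size=(S, D))                  # standard normal stimuli
r = x @ W.T + 0.1 * rng.normal(size=(S, M))  # noisy neural activities

# Fit the decoder mean A by least squares (the optimal linear readout)
A, *_ = np.linalg.lstsq(r, x, rcond=None)
resid = x - r @ A
s2 = resid.var()                             # shared decoder variance

h_x = 0.5 * D * np.log(2 * np.pi * np.e)     # entropy of N(0, I), nats
log_q = (-0.5 * D * np.log(2 * np.pi * s2)
         - 0.5 * (resid ** 2).sum(axis=1) / s2)
bound = h_x + log_q.mean()                   # lower bound on I(X; R)
print(bound)
```

Maximizing this bound over the encoder parameters is the surrogate for maximizing the (intractable) mutual information itself; the letter's contribution is showing how such optimization can be carried out online with local feedback rather than with a batch fit like the one above.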
  3. Perception can be highly dependent on stimulus context, but whether and how sensory areas encode that context remains uncertain. We used an ambiguous auditory stimulus, a tritone pair, to investigate the neural activity associated with a preceding contextual stimulus that strongly influenced how the tritone pair was perceived: as either an ascending or a descending step in pitch. We recorded single-unit responses from a population of auditory cortical cells in awake ferrets listening to tritone pairs preceded by the contextual stimulus. We find that the responses adapt locally to the contextual stimulus, consistent with human MEG recordings from the auditory cortex under the same conditions. Decoding the population responses demonstrates that cells responding to pitch changes can reliably predict the context-sensitive percept of the tritone pairs. Conversely, decoding the individual pitch representations and taking their distance in the circular Shepard tone space predicts the opposite of the percept. The various percepts can be readily captured and explained by a neural model of cortical activity based on populations of adapting pitch and pitch-direction cells, aligned with the neurophysiological responses. Together, these decoding and model results suggest that contextual influences on perception may already be encoded at the level of the primary sensory cortices, reflecting basic neural response properties commonly found in these areas.
  4. Neural responses evoked by a stimulus reduce upon repetition. While this adaptation allows the sensory system to attend to novel cues, does information about the recurring stimulus, particularly its intensity, get compromised? We explored this issue in the locust olfactory system. We found that locusts' innate behavioral response to odorants varied with repetition and stimulus intensity. Counterintuitively, the stimulus-intensity-dependent differences became significant only after adaptation had set in. Adaptation also altered responses of individual neurons in the antennal lobe (the neural network downstream of the insect antenna). These response variations to repetitions of the same stimulus were unpredictable and inconsistent across intensities. Although both adaptation and intensity decrements resulted in an overall reduction in spiking activity across neurons, these changes could be disentangled, and information about stimulus intensity could be robustly maintained by ensemble neural responses. In sum, these results show how information about odor intensity can be preserved in an adaptation-invariant manner.
  5. Prefrontal cortex modulates sensory signals in extrastriate visual cortex, in part via its direct projections from the frontal eye field (FEF), an area involved in selective attention. We find that working memory-related activity is a dominant signal within FEF input to visual cortex. Although this signal alone does not evoke spiking responses in areas V4 and MT during memory, the gain of visual responses in these areas increases, and neuronal receptive fields expand and shift towards the remembered location, improving the stimulus representation by neuronal populations. These results provide a basis for enhancing the representation of working memory targets and implicate persistent FEF activity as a basis for the interdependence of working memory and selective attention. 