Title: Population encoding of stimulus features along the visual hierarchy
The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to a wide range of visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to the mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating that they are more like big retinas than little brains.
Award ID(s):
1822598
PAR ID:
10552812
Publisher / Repository:
National Academy of Sciences
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
121
Issue:
4
ISSN:
0027-8424
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. What are the fundamental principles that inform representation in the primate visual brain? While objects have become an intuitive framework for studying neurons in many parts of cortex, it is possible that neurons follow a more expressive organizational principle, such as encoding generic features present across textures, places, and objects. In this study, we used multielectrode arrays to record from neurons in early (V1/V2), middle (V4), and late [posterior inferotemporal (PIT) cortex] areas across the visual hierarchy, estimating each neuron's local operation across natural scenes via "heatmaps." We found that, while populations of neurons with foveal receptive fields across V1/V2, V4, and PIT responded over the full scene, they focused on salient subregions within object outlines. Notably, neurons preferentially encoded animal features rather than general objects, with this trend strengthening along the visual hierarchy. These results show that the monkey ventral stream is partially organized to encode local animal features over objects, even as early as primary visual cortex.
  2.
    Two interleaved stimulus sets were identical except for the background. In one, the background of the flow stimuli was the mid-gray of the interstimulus interval (equal background, eqbg), leading to a change of 9-10% in the space-average luminance. In the other, the space-average luminance of the entire stimulus field was held constant (equal luminance, eqlum) to within 0.5%; i.e., the background was slightly lightened when the dots in the flow were dark, and darkened when the dots were bright. Most cortical cells appeared to respond similarly to the two stimulus sets, as if stimulus structure mattered but the background change did not, while the responses of most retinal ganglion cells appeared to differ between the two conditions. Machine learning algorithms confirmed this quantitatively. A manifold embedding of neurons' responses to the two stimulus sets was constructed using diffusion maps. In this manifold, the responses of the same cell to eqlum and eqbg stimuli were significantly closer to one another for V1 than for the retina. Geometrically, the median ratio of the distance between the responses of each cell to the two stimulus sets, relative to the distance to the closest cell on the manifold, was 3.5 for V1 compared to 12.7 for retina. Topologically, the fraction of cells for which the responses of the same cell to the two stimulus sets were connected in the diffusion-map datagraph was 53% for V1 but only 9% for retina; when retina and cortex were co-embedded in the manifold, these fractions were 44% and 6%. While retina and cortex differ on average, it will be intriguing to determine whether particular classes of retinal cells behave more like V1 neurons, and vice versa.
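The diffusion-map embedding and the distance-ratio statistic described in this abstract can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the array name `responses`, the Gaussian kernel bandwidth `sigma`, and the embedding dimension are all assumptions for the sketch.

```python
import numpy as np

def diffusion_map(responses, sigma=1.0, n_dims=2):
    """Embed cells (rows of `responses`) into a low-dimensional diffusion space."""
    # Pairwise squared Euclidean distances between cells' response vectors
    d2 = np.sum((responses[:, None, :] - responses[None, :, :]) ** 2, axis=-1)
    # Gaussian affinity kernel between cells
    k = np.exp(-d2 / (2 * sigma ** 2))
    # Row-normalize to obtain a Markov transition matrix
    p = k / k.sum(axis=1, keepdims=True)
    # Eigendecompose; skip the trivial constant eigenvector (largest eigenvalue)
    vals, vecs = np.linalg.eig(p)
    order = np.argsort(-vals.real)
    return vecs[:, order[1:n_dims + 1]].real

def distance_ratio(emb_a, emb_b, cell):
    """Distance between one cell's two embedded responses, divided by the
    distance to its nearest neighboring cell on the manifold."""
    own = np.linalg.norm(emb_a[cell] - emb_b[cell])
    others = np.delete(emb_a, cell, axis=0)
    nearest = np.min(np.linalg.norm(others - emb_a[cell], axis=1))
    return own / nearest
```

In this sketch, a small median `distance_ratio` across cells (as reported for V1) means each cell's eqlum and eqbg responses sit close together relative to the local spacing of the manifold, while a large ratio (as for retina) means the two responses land far apart.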
  3. The integration of synaptic inputs onto dendrites provides the basis for computation within individual neurons. Whereas recent studies have begun to outline the spatial organization of synaptic inputs on individual neurons, the principles relating that organization to specific neural functions are not well known. Here we performed two-photon dendritic imaging with a genetically encoded glutamate sensor in awake monkeys, and successfully mapped the excitatory synaptic inputs on dendrites of individual V1 neurons with high spatial and temporal resolution. We found that although synaptic inputs on dendrites were functionally clustered by feature, they were highly scattered in multidimensional feature space, providing a potential substrate for local feature integration on dendritic branches. We also found that nearly all individual neurons received both abundant orientation-selective and color-selective inputs. Furthermore, apical dendrites received more diverse inputs than basal dendrites, with larger receptive fields and relatively longer response latencies, suggesting a specific apical role in integrating feedback during visual information processing.
  4. We have created encoding manifolds to reveal the overall responses of a brain area to a variety of stimuli. Encoding manifolds organize response properties globally: each point on an encoding manifold is a neuron, and nearby neurons respond similarly to the stimulus ensemble over time. We previously found, using a large stimulus ensemble including optic flows, that encoding manifolds for the retina were highly clustered, with each cluster corresponding to a different ganglion cell type. In contrast, the topology of the V1 manifold was continuous. Now, using responses of individual neurons from the Allen Institute Visual Coding-Neuropixels dataset in the mouse, we infer encoding manifolds for V1 and for five higher cortical visual areas (VISam, VISal, VISpm, VISlm, and VISrl). We show here that the encoding manifold topology computed only from responses to various grating stimuli is also continuous, not only for V1 but also for the higher visual areas, with smooth coordinates spanning it that include, among others, orientation selectivity and firing-rate magnitude. Surprisingly, the encoding manifold for gratings also provides information about natural scene responses. To investigate whether neurons respond more strongly to gratings or to natural scenes, we plot the log ratio of natural-scene responses to grating responses (mean firing rates) on the encoding manifold. This reveals a global coordinate axis organizing neurons' preferences between these two stimulus classes; in VISp, this coordinate is orthogonal (i.e., uncorrelated) to the one organizing firing-rate magnitudes. Analyzing responses by layer, we find that preference for gratings is concentrated in layer 6, whereas preference for natural scenes tends to be higher in layers 2/3 and 4. We also find that preference for natural scenes dominates the responses of neurons that prefer low (0.02 cpd) and high (0.32 cpd) spatial frequencies, rather than intermediate ones (0.04 to 0.16 cpd).
In conclusion, while gratings may seem limited and natural scenes unconstrained, machine learning algorithms can reveal subtle relationships between them that extend beyond linear techniques.
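The log-ratio preference coordinate described in this abstract can be sketched in a few lines. This is illustrative only, not the Allen SDK analysis pipeline: the array names, the `eps` regularizer, and the use of a simple Pearson correlation to check orthogonality are assumptions for the sketch.

```python
import numpy as np

def scene_grating_preference(scene_rates, grating_rates, eps=1e-3):
    """Log ratio of mean natural-scene firing rate to mean grating firing rate,
    per neuron. Positive values: the neuron fires more to natural scenes;
    negative values: more to gratings. `eps` guards against log(0)."""
    return np.log((scene_rates + eps) / (grating_rates + eps))

def coordinate_correlation(pref, firing_rate_coord):
    """Pearson correlation between the preference coordinate and another
    manifold coordinate; a value near zero would indicate the two axes are
    (linearly) uncorrelated, as reported for VISp."""
    return np.corrcoef(pref, firing_rate_coord)[0, 1]
```

Plotting `scene_grating_preference` as a color map over the embedded neurons is what reveals the global coordinate axis described above.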
  5. In the primate visual system, visual object recognition involves a series of cortical areas arranged hierarchically along the ventral visual pathway. As information flows through this hierarchy, neurons become progressively tuned to more complex image features. The circuit mechanisms and computations underlying the increasing complexity of these receptive fields (RFs) remain unidentified. To understand how this complexity emerges in the secondary visual area (V2), we investigated the functional organization of inputs from the primary visual cortex (V1) to V2 by combining retrograde anatomical tracing of these inputs with functional imaging of feature maps in macaque monkey V1 and V2. We found that V1 neurons sending inputs to single V2 orientation columns have a broad range of preferred orientations, but are strongly biased towards the orientation represented at the injected V2 site. For each V2 site, we then constructed a feedforward model based on the linear combination of its anatomically identified large-scale V1 inputs, and studied the response properties of the generated V2 RFs. We found that V2 RFs derived from the linear feedforward model were either elongated versions of V1 filters or had spatially complex structures. These modeled RFs predicted V2 neuron responses to oriented grating stimuli with high accuracy. Remarkably, this simple model also explained the greater selectivity to naturalistic textures of V2 cells compared to their V1 input cells. Our results demonstrate that simple linear combinations of feedforward inputs can account for the orientation selectivity and texture sensitivity of V2 RFs.
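The linear feedforward model in this abstract can be illustrated with a toy sketch: a model V2 receptive field built as a weighted sum of V1-like Gabor filters, with weights biased toward the V2 site's preferred orientation. All parameters here (filter size, spatial frequency, weighting width) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gabor(size, theta, freq=0.2, sigma=4.0):
    """A V1-like oriented filter: cosine grating at orientation `theta`
    under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def v2_rf(size=21, preferred=0.0, n_inputs=12, bias=0.5):
    """Model V2 RF: linear combination of Gabors whose orientations are
    biased toward `preferred` (von Mises-like weighting), mimicking the
    biased V1 input pool identified by the tracing experiments."""
    thetas = np.linspace(0, np.pi, n_inputs, endpoint=False)
    weights = np.exp(np.cos(2 * (thetas - preferred)) / bias)
    rf = sum(w * gabor(size, t) for w, t in zip(weights, thetas))
    return rf / np.abs(rf).max()
```

With a narrow orientation bias the summed RF stays close to an elongated Gabor; broadening the weighting mixes in off-preferred inputs and yields the spatially more complex structures the abstract describes.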