We have created encoding manifolds to reveal the overall responses of a brain area to a variety of stimuli. Encoding manifolds organize response properties globally: each point on an encoding manifold is a neuron, and nearby neurons respond similarly to the stimulus ensemble in time. We previously found, using a large stimulus ensemble including optic flows, that encoding manifolds for the retina were highly clustered, with each cluster corresponding to a different ganglion cell type. In contrast, the topology of the V1 manifold was continuous. Now, using responses of individual neurons from the Allen Institute Visual Coding-Neuropixels dataset in the mouse, we infer encoding manifolds for V1 and for five higher cortical visual areas (VISam, VISal, VISpm, VISlm, and VISrl). We show here that the encoding manifold topology computed only from responses to various grating stimuli is also continuous, not only for V1 but also for the higher visual areas, with smooth coordinates spanning it that include, among others, orientation selectivity and firing-rate magnitude. Surprisingly, the encoding manifold for gratings also provides information about natural scene responses. To investigate whether neurons respond more strongly to gratings or natural scenes, we plot the log ratio of natural scene responses to grating responses (mean firing rates) on the encoding manifold. This reveals a global coordinate axis organizing neurons' preferences between these two stimulus classes. This coordinate is orthogonal (i.e., uncorrelated) to the coordinate organizing firing-rate magnitudes in VISp. Analyzing responses by layer, we find that preference for gratings is concentrated in layer 6, whereas preference for natural scenes tends to be higher in layers 2/3 and 4. We also find that preference for natural scenes dominates the responses of neurons that prefer low (0.02 cpd) and high (0.32 cpd) spatial frequencies, rather than intermediate ones (0.04 to 0.16 cpd).
Conclusion: although gratings seem limited and natural scenes unconstrained, machine learning algorithms can reveal subtle relationships between them that extend beyond linear techniques.
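The stimulus-preference coordinate described above is just a per-neuron log ratio of mean firing rates. A minimal sketch with simulated rates (not the authors' code or data; all values here are made up for illustration):

```python
import numpy as np

# Hedged sketch: each neuron has a mean firing rate to natural scenes and to
# gratings; the preference coordinate is the log of their ratio.
rng = np.random.default_rng(0)
n_neurons = 100
scene_rates = rng.gamma(shape=2.0, scale=3.0, size=n_neurons)    # Hz, simulated
grating_rates = rng.gamma(shape=2.0, scale=3.0, size=n_neurons)  # Hz, simulated

eps = 1e-9  # guard against log(0) for silent neurons
pref = np.log(scene_rates + eps) - np.log(grating_rates + eps)

# pref > 0: the neuron fires more to natural scenes; pref < 0: prefers gratings.
print(f"{np.sum(pref > 0)} of {n_neurons} simulated neurons prefer scenes")
```

Plotting `pref` as a color over the manifold embedding would reproduce the kind of global coordinate axis the abstract describes.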
Flow stimuli reveal ecologically appropriate responses in mouse visual cortex
Assessments of the mouse visual system based on spatial-frequency analysis imply that its visual capacity is low, with few neurons responding to spatial frequencies greater than 0.5 cycles per degree. However, visually mediated behaviors, such as prey capture, suggest that the mouse visual system is more precise. We introduce a stimulus class—visual flow patterns—that is more like what the mouse would encounter in the natural world than are sine-wave gratings but is more tractable for analysis than are natural images. We used 128-site silicon microelectrodes to measure the simultaneous responses of single neurons in the primary visual cortex (V1) of alert mice. While holding temporal-frequency content fixed, we explored a class of drifting patterns of black or white dots that have energy only at higher spatial frequencies. These flow stimuli evoke strong visually mediated responses well beyond those predicted by spatial-frequency analysis. Flow responses predominate in higher spatial-frequency ranges (0.15–1.6 cycles per degree), many are orientation or direction selective, and flow responses of many neurons depend strongly on sign of contrast. Many cells exhibit distributed responses across our stimulus ensemble. Together, these results challenge conventional linear approaches to visual processing and expand our understanding of the mouse’s visual capacity to behaviorally relevant ranges.
- PAR ID: 10106398
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 115
- Issue: 44
- ISSN: 0027-8424
- Page Range / eLocation ID: 11304 to 11309
- Sponsoring Org: National Science Foundation
More Like this
Objectively differentiating patient mental states based on electrical activity, as opposed to overt behavior, is a fundamental neuroscience problem with medical applications, such as identifying patients in locked-in state vs. coma. Electroencephalography (EEG), which detects millisecond-level changes in brain activity across a range of frequencies, allows for assessment of external stimulus processing by the brain in a non-invasive manner. We applied machine learning methods to 26-channel EEG data of 24 fluent Deaf signers watching videos of sign language sentences (comprehension condition), and the same videos reversed in time (non-comprehension condition), to objectively separate vision-based high-level cognition states. While spectrotemporal parameters of the stimuli were identical in comprehension vs. non-comprehension conditions, the neural responses of participants varied based on their ability to linguistically decode visual data. We aimed to determine which subset of parameters (specific scalp regions or frequency ranges) would be necessary and sufficient for high classification accuracy of comprehension state. Optical flow, characterizing the distribution of velocities of objects in an image, was calculated for each pixel of the stimulus videos using the MATLAB Vision toolbox. Coherence between optical flow in the stimulus and the EEG neural response (per video, per participant) was then computed using canonical component analysis with the NoiseTools toolbox. Peak correlations were extracted for each frequency for each electrode, participant, and video. A set of standard ML algorithms was applied to the entire dataset (26 channels, frequencies from 0.2 Hz to 12.4 Hz, binned in 1 Hz increments), with consistent out-of-sample 100% accuracy for frequencies in the 0.2–1 Hz range for all regions, and above 80% accuracy for frequencies < 4 Hz. Sparse Optimal Scoring (SOS) was then applied to the EEG data to reduce the dimensionality of the features and improve model interpretability. SOS with an elastic-net penalty resulted in out-of-sample classification accuracy of 98.89%. The sparsity pattern in the model indicated that frequencies between 0.2–4 Hz were primarily used in the classification, suggesting that the underlying data may be group sparse. Further, SOS with a group lasso penalty was applied to regional subsets of electrodes (anterior, posterior, left, right). All trials achieved greater than 97% out-of-sample classification accuracy. The sparsity patterns from the trials using 1 Hz bins over individual regions consistently indicated that frequencies between 0.2–1 Hz were primarily used in the classification, with the anterior and left regions performing best at 98.89% and 99.17% classification accuracy, respectively. While the sparsity pattern may not be the unique optimal model for a given trial, the high classification accuracy indicates that these models have accurately identified common neural responses to visual linguistic stimuli. Cortical tracking of spectro-temporal change in the visual signal of sign language appears to rely on lower frequencies on the time scale of the N400/P600 event-related potentials, indicating that visual language comprehension is grounded in predictive processing mechanisms.
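The core finding above is that the lowest frequency bins alone carry enough information to classify comprehension state. Sparse Optimal Scoring itself is not sketched here; as a stand-in, the toy below simulates coherence features (channels × 1 Hz bins) in which only the lowest bin is class-informative, and shows that even a plain nearest-centroid rule then classifies well out of sample. All dimensions and signal strengths are invented for illustration:

```python
import numpy as np

# Simulated coherence features: n_trials x 26 channels x 12 one-Hz bins,
# with class signal injected only into the lowest (<1 Hz) bin.
rng = np.random.default_rng(1)
n_trials, n_channels, n_bins = 200, 26, 12
labels = rng.integers(0, 2, n_trials)        # 1 = comprehension (simulated)

X = rng.normal(size=(n_trials, n_channels, n_bins))
X[labels == 1, :, 0] += 1.5                  # signal lives in the lowest bin

feats = X.reshape(n_trials, -1)
train, test = np.arange(0, 150), np.arange(150, n_trials)

# Nearest-centroid classifier: assign each held-out trial to the closer
# class mean computed on the training split.
c0 = feats[train][labels[train] == 0].mean(axis=0)
c1 = feats[train][labels[train] == 1].mean(axis=0)
pred = (np.linalg.norm(feats[test] - c1, axis=1)
        < np.linalg.norm(feats[test] - c0, axis=1)).astype(int)
accuracy = (pred == labels[test]).mean()
print(f"out-of-sample accuracy: {accuracy:.2f}")
```

A sparse method like SOS goes further by also zeroing out the uninformative bins, which is what yields the interpretable 0.2–1 Hz sparsity pattern reported above.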
The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to diverse visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to the mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating they are more like big retinas than little brains.
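The discrete-vs-continuous question above is what a manifold embedding makes visible: clustered points suggest discrete cell types, a smooth cloud suggests a continuum. The sketch below is a generic diffusion-style embedding on simulated response vectors, not the paper's specific technique; the two simulated "cell types" separate cleanly in the embedded coordinates:

```python
import numpy as np

# Hedged sketch: rows of R are neurons' response vectors across a stimulus
# ensemble; a random-walk (diffusion-style) embedding places neurons with
# similar responses near each other.
def embed(R, sigma=1.0, n_dims=2):
    d2 = np.sum((R[:, None, :] - R[None, :, :]) ** 2, axis=-1)  # pairwise dists
    W = np.exp(-d2 / (2 * sigma ** 2))         # Gaussian affinity
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic random walk
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector; keep the next n_dims coordinates
    return vecs.real[:, order[1:n_dims + 1]]

# two simulated "cell types" with distinct response profiles
rng = np.random.default_rng(2)
R = np.vstack([rng.normal(0.0, 0.3, (30, 10)),
               rng.normal(2.0, 0.3, (30, 10))])
Y = embed(R, sigma=3.0)
print(Y.shape)  # one 2-D point per neuron
```

On real data, retina-like populations would form separated clusters in `Y`, while V1-like populations would fill the space continuously.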
To produce consistent sensory perception, neurons must maintain stable representations of sensory input. However, neurons in many regions exhibit progressive drift across days. Longitudinal studies have found stable responses to artificial stimuli across sessions in visual areas, but it is unclear whether this stability extends to naturalistic stimuli. We performed chronic 2-photon imaging of mouse V1 populations to directly compare the representational stability of artificial versus naturalistic visual stimuli over weeks. Responses to gratings were highly stable across sessions. However, neural responses to naturalistic movies exhibited progressive representational drift across sessions. Differential drift was present across cortical layers, in inhibitory interneurons, and could not be explained by differential response strength or higher order stimulus statistics. However, representational drift was accompanied by similar differential changes in local population correlation structure. These results suggest representational stability in V1 is stimulus-dependent and may relate to differences in preexisting circuit architecture of co-tuned neurons.
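One simple way to quantify the progressive drift described above is to correlate the population's stimulus-response matrix in a reference session with the same matrix in each later session; stable codes keep this correlation high, drifting codes show it decay. This is an illustrative sketch on simulated data, not the study's analysis:

```python
import numpy as np

# Simulated neurons x stimuli response matrix, drifting a little each session.
rng = np.random.default_rng(3)
n_neurons, n_stimuli, n_sessions = 50, 40, 6

ref = rng.normal(size=(n_neurons, n_stimuli))  # reference-session responses
drift_per_session = 0.4                        # simulated drift magnitude

similarity = []
resp = ref.copy()
for s in range(n_sessions):
    resp = resp + drift_per_session * rng.normal(size=resp.shape)
    # correlation between reference and current session's population code
    r = np.corrcoef(ref.ravel(), resp.ravel())[0, 1]
    similarity.append(r)

print(np.round(similarity, 2))  # decays as cumulative drift accumulates
```

Comparing this similarity curve for gratings versus movies, session by session, is the kind of contrast the study reports: flat for gratings, declining for naturalistic movies.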
Understanding the brain requires understanding neurons' functional responses to the circuit architecture shaping them. Here we introduce the MICrONS functional connectomics dataset with dense calcium imaging of around 75,000 neurons in primary visual cortex (VISp) and higher visual areas (VISrl, VISal and VISlm) in an awake mouse that is viewing natural and synthetic stimuli. These data are co-registered with an electron microscopy reconstruction containing more than 200,000 cells and 0.5 billion synapses. Proofreading of a subset of neurons yielded reconstructions that include complete dendritic trees as well as the local and inter-areal axonal projections that map up to thousands of cell-to-cell connections per neuron. Released as an open-access resource, this dataset includes the tools for data retrieval and analysis [1,2]. Accompanying studies describe its use for comprehensive characterization of cell types [3-6], a synaptic-level connectivity diagram of a cortical column [4], and uncovering cell-type-specific inhibitory connectivity that can be linked to gene expression data [4,7]. Functionally, we identify new computational principles of how information is integrated across visual space [8], characterize novel types of neuronal invariances [9], and bring structure and function together to uncover a general principle for connectivity between excitatory neurons within and across areas [10,11].