

Title: What Does Dorsal Cortex Contribute to Perception?
According to the influential “Two Visual Pathways” hypothesis, the cortical visual system is segregated into two pathways, with the ventral, occipitotemporal pathway subserving object perception, and the dorsal, occipitoparietal pathway subserving the visuomotor control of action. However, growing evidence suggests that the dorsal pathway also plays a functional role in object perception. In the current article, we present evidence that the dorsal pathway contributes uniquely to the perception of a range of visuospatial attributes that are not redundant with representations in ventral cortex. We describe how dorsal cortex is recruited automatically during perception, even when no explicit visuomotor response is required. Importantly, we propose that dorsal cortex may selectively process visual attributes that can inform the perception of potential actions on objects and environments, and we consider plausible developmental and cognitive mechanisms that might give rise to these representations. As such, we consider whether naturalistic stimuli, such as real-world solid objects, might engage dorsal cortex more so than simplified or artificial stimuli such as images that do not afford action, and how the use of suboptimal stimuli might limit our understanding of the functional contribution of dorsal cortex to visual perception.
Award ID(s): 1632849
PAR ID: 10222678
Journal Name: Open Mind
Volume: 4
ISSN: 2470-2986
Page Range / eLocation ID: 40 to 56
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract

    Despite their anatomical and functional distinctions, there is growing evidence that the dorsal and ventral visual pathways interact to support object recognition. However, the exact nature of these interactions remains poorly understood. Is the presence of identity-relevant object information in the dorsal pathway simply a byproduct of ventral input? Or, might the dorsal pathway be a source of input to the ventral pathway for object recognition? In the current study, we used high-density EEG—a technique with high temporal precision and spatial resolution sufficient to distinguish parietal and temporal lobes—to characterise the dynamics of dorsal and ventral pathways during object viewing. Using multivariate analyses, we found that category decoding in the dorsal pathway preceded that in the ventral pathway. Importantly, the dorsal pathway predicted the multivariate responses of the ventral pathway in a time-dependent manner, rather than the other way around. Together, these findings suggest that the dorsal pathway is a critical source of input to the ventral pathway for object recognition.

  2. Neural processing of objects with action associations recruits dorsal visual regions more than the neural processing of objects without such associations. We hypothesized that because the dorsal and ventral visual pathways have differing proportions of magno- and parvocellular input, there should be behavioral differences in perceptual tasks between manipulable and nonmanipulable objects. This hypothesis was tested in college-age adults across five experiments (Ns = 26, 26, 30, 25, and 25) using a gap-detection task, suited to the spatial resolution of parvocellular processing, and an object-flicker-discrimination task, suited to the temporal resolution of magnocellular processing. Directly predicted from the cellular composition of each pathway, a strong nonmanipulable-object advantage was observed in gap detection, and a small manipulable-object advantage was observed in flicker discrimination. Additionally, these effects were modulated by reducing object recognition through inversion and by suppressing magnocellular processing using red light. These results establish perceptual differences between objects dependent on semantic knowledge.

  3. Oh, A ; Naumann, T ; Globerson, A ; Saenko, K ; Hardt, M ; Levine, S (Ed.)
    The human visual system uses two parallel pathways for spatial processing and object recognition. In contrast, computer vision systems tend to use a single feedforward pathway, rendering them less robust, adaptive, or efficient than human vision. To bridge this gap, we developed a dual-stream vision model inspired by the human eyes and brain. At the input level, the model samples two complementary visual patterns to mimic how the human eyes use magnocellular and parvocellular retinal ganglion cells to separate retinal inputs to the brain. At the backend, the model processes the separate input patterns through two branches of convolutional neural networks (CNNs) to mimic how the human brain uses the dorsal and ventral cortical pathways for parallel visual processing. The first branch (WhereCNN) samples a global view to learn spatial attention and control eye movements. The second branch (WhatCNN) samples a local view to represent the object around the fixation. Over time, the two branches interact recurrently to build a scene representation from moving fixations. We compared this model with human brains processing the same movie and evaluated their functional alignment by linear transformation. The WhereCNN and WhatCNN branches were found to differentially match the dorsal and ventral pathways of the visual cortex, respectively, primarily due to their different learning objectives, rather than their distinctions in retinal sampling or sensitivity to attention-driven eye movements. These model-based results lead us to speculate that the distinct responses and representations of the ventral and dorsal streams are more influenced by their distinct goals in visual attention and object recognition than by their specific bias or selectivity in retinal inputs. This dual-stream model takes a further step in brain-inspired computer vision, enabling parallel neural networks to actively explore and understand the visual surroundings.
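    The input-level split described above can be sketched in a few lines. The function below is a minimal illustration, not the authors' code: it assumes a grayscale image, uses block-average downsampling for the coarse magnocellular-like global view, and takes a fixation-centered crop for the high-resolution parvocellular-like local view. The function name and parameters are hypothetical.

    ```python
    import numpy as np

    def retinal_sampling(image, fixation, fovea=8, global_res=(16, 16)):
        """Split one image into two complementary views: a coarse global view
        (magnocellular-like, input to a WhereCNN-style branch) and a
        high-resolution crop around fixation (parvocellular-like, input to a
        WhatCNN-style branch)."""
        h, w = image.shape
        gh, gw = global_res
        # Coarse global view: block-average downsampling, with dimensions
        # truncated to a multiple of the target resolution.
        th, tw = h - h % gh, w - w % gw
        global_view = (image[:th, :tw]
                       .reshape(gh, th // gh, gw, tw // gw)
                       .mean(axis=(1, 3)))
        # High-resolution local view: a fovea-sized crop around the fixation
        # point, clipped so the window stays inside the image.
        y, x = fixation
        y0 = int(np.clip(y - fovea // 2, 0, h - fovea))
        x0 = int(np.clip(x - fovea // 2, 0, w - fovea))
        local_view = image[y0:y0 + fovea, x0:x0 + fovea]
        return global_view, local_view
    ```

    In the full model, each view would feed its own CNN branch; the recurrent interaction between branches (spatial attention selecting the next fixation) is beyond this sketch.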
  4. Many goal-directed actions that require rapid visuomotor planning and perceptual decision-making are affected in older adults, causing difficulties in execution of many functional activities of daily living. Visuomotor planning and perceptual identification are mediated by the dorsal and ventral visual streams, respectively, but it is unclear how age-induced changes in sensory processing in these streams contribute to declines in visuomotor decision-making performance. Previously, we showed that in young adults, task demands influenced movement strategies during visuomotor decision-making, reflecting differential integration of sensory information between the two streams. Here, we asked whether older adults would exhibit deficits in interactions between the two streams during demanding motor tasks. Older adults (n = 15) and young controls (n = 26) performed reaching or interception movements toward virtual objects. In some blocks of trials, participants also had to select an appropriate movement goal based on the shape of the object. Our results showed that older adults corrected fewer initial decision errors during both reaching and interception movements. During the interception decision task, older adults made more decision- and execution-related errors than young adults, which were related to early initiation of their movements. Together, these results suggest that older adults have a reduced ability to integrate new perceptual information to guide online action, which may reflect impaired ventral-dorsal stream interactions. NEW & NOTEWORTHY Older adults show declines in vision, decision-making, and motor control, which can lead to functional limitations. We used a rapid visuomotor decision task to examine how these deficits may interact to affect task performance. Compared with healthy young adults, older adults made more errors in both decision-making and motor execution, especially when the task required intercepting moving targets. This suggests that age-related declines in integrating perceptual and motor information may contribute to functional deficits.
  5. Abstract

    Coding for visual stimuli in the ventral stream is known to be invariant to object identity preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance invariant representations. Recently, artificial convolutional networks have succeeded in both learning such invariant properties and, surprisingly, predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable such success—supervised learning and the backpropagation algorithm—are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of the simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.
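    The core construction can be illustrated concretely. The sketch below is a minimal example of my own, not the paper's code: it pools a template response over a finite orthogonal group (here, the four 90-degree rotations of a square patch, each of which is a pixel permutation and so preserves inner products) in the style of a complex cell pooling simple-cell responses. The pooled signature is then invariant to applying any group element to the input.

    ```python
    import numpy as np

    # Finite orthogonal group: the four 90-degree rotations of a square patch.
    GROUP = [lambda a, k=k: np.rot90(a, k) for k in range(4)]

    def invariant_signature(x, template, group=GROUP):
        """Complex-cell-style pooling: take the max simple-cell response
        <g(template), x> over all group elements g. Because the group is
        closed under composition and each g preserves inner products, the
        set of responses (and hence the max) is unchanged when x itself is
        transformed by any element of the group."""
        return max(float(np.vdot(g(template), x)) for g in group)
    ```

    For example, with random `x` and `template` patches, `invariant_signature(np.rot90(x, k), template)` equals `invariant_signature(x, template)` for every rotation `k`, because rotating `x` merely permutes which group element attains the maximum.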