

Title: View cells in the hippocampus and prefrontal cortex of macaques during virtual navigation
Abstract

Cells selectively activated by a particular view of an environment have been found in the primate hippocampus (HPC). Whether view cells are present in other brain areas, and how view selectivity interacts with other variables such as object features and place, remain unclear. Here, we explore these issues by recording the responses of neurons in the HPC and the lateral prefrontal cortex (LPFC) of rhesus macaques performing a task in which they learn new context‐object associations while navigating a virtual environment using a joystick. We measured neuronal responses at different locations in a virtual maze where animals freely directed gaze to different regions of the visual scenes. We show that specific views containing task-relevant objects selectively activated a proportion of HPC units and an even higher proportion of LPFC units. Place selectivity was scarce and generally dependent on view. Many view cells were not affected by changing the object color or the context cue, two task-relevant features. However, a small proportion of view cells showed selectivity for these two features. Our results show that during navigation in a virtual environment with complex and dynamic visual stimuli, view cells are found in both the HPC and the LPFC. View cells may have developed as a multiarea specialization in diurnal primates to encode the complexities and layouts of the environment through gaze exploration, which ultimately enables building cognitive maps of space that guide navigation.

 
PAR ID: 10419118
Author(s) / Creator(s):
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name: Hippocampus
Volume: 33
Issue: 5
ISSN: 1050-9631
Page Range / eLocation ID: p. 573-585
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Visual virtual reality (VR) systems for head-fixed mice offer advantages over real-world studies for investigating the neural circuitry underlying behavior. However, current VR approaches do not fully cover the visual field of view of mice, do not stereoscopically illuminate the binocular zone, and leave the lab frame visible. To overcome these limitations, we developed iMRSIV (Miniature Rodent Stereo Illumination VR), VR goggles for mice. Our system is compact, separately illuminates each eye for stereo vision, and provides each eye with an ∼180° field of view, thus excluding the lab frame while accommodating saccades. While navigating, mice using iMRSIV engaged in virtual behaviors more quickly than in a current monitor-based system and displayed freezing and fleeing reactions to overhead looming stimulation. Using iMRSIV with two-photon functional imaging, we found large populations of hippocampal place cells during virtual navigation, global remapping during environment changes, and unique responses of place cell ensembles to overhead looming stimulation.
  2. Abstract

    How do newborns learn to recognize objects? According to temporal learning models in computational neuroscience, the brain constructs object representations by extracting smoothly changing features from the environment. To date, however, it is unknown whether newborns depend on smoothly changing features to build invariant object representations. Here, we used an automated controlled‐rearing method to examine whether visual experience with smoothly changing features facilitates the development of view‐invariant object recognition in a newborn animal model—the domestic chick (Gallus gallus). When newborn chicks were reared with a virtual object that moved smoothly over time, the chicks created view‐invariant representations that were selective for object identity and tolerant to viewpoint changes. Conversely, when newborn chicks were reared with a temporally non‐smooth object, the chicks developed less selectivity for identity features and less tolerance to viewpoint changes. These results provide evidence for a “smoothness constraint” on the development of invariant object recognition and indicate that newborns leverage the temporal smoothness of natural visual environments to build abstract mental models of objects.

     
  3.
    Mobile Augmented Reality (AR) provides immersive experiences by aligning virtual content (holograms) with a view of the real world. When a user places a hologram, it is usually expected that, like a real object, it remains in the same place. However, positional errors frequently occur due to inaccurate environment mapping and device localization, which are determined to a large extent by the properties of natural visual features in the scene. In this demonstration we present SceneIt, the first visual environment rating system for mobile AR based on predictions of hologram positional error magnitude. SceneIt allows users to determine whether virtual content placed in their environment will drift noticeably out of position, without requiring them to place that content. It shows that the severity of positional error for a given visual environment is predictable, and that this prediction can be calculated with sufficiently high accuracy and low latency to be useful in mobile AR applications.
  4. Abstract

    Investigations into how individual neurons encode behavioral variables of interest have revealed specific representations in single neurons, such as place and object cells, as well as a wide range of cells with conjunctive encodings or mixed selectivity. However, as most experiments examine neural activity within individual tasks, it is currently unclear if and how neural representations change across different task contexts. Within this discussion, the medial temporal lobe is particularly salient, as it is known to be important for multiple behaviors including spatial navigation and memory; however, the relationship between these functions remains unclear. Here, to investigate how representations in single neurons vary across different task contexts in the medial temporal lobe, we collected and analyzed single‐neuron activity from human participants as they completed a paired‐task session consisting of a passive‐viewing visual working memory task and a spatial navigation and memory task. Five patients contributed 22 paired‐task sessions, which were spike sorted together to allow the same putative single neurons to be compared between the different tasks. Within each task, we replicated concept‐related activations in the working memory task, as well as target‐location and serial‐position responsive cells in the navigation task. When comparing neuronal activity between tasks, we first established that a significant number of neurons maintained the same kind of representation, responding to stimulus presentations across tasks. Further, we found cells that changed the nature of their representation across tasks, including a significant number of cells that were stimulus responsive in the working memory task and responded to serial position in the spatial task. Overall, our results support a flexible encoding of multiple, distinct aspects of different tasks by single neurons in the human medial temporal lobe, whereby some individual neurons change the nature of their feature coding between task contexts.

     
  5. Specific features of visual objects innately draw approach responses in animals and provide natural signals of potential reward. However, visual sampling behaviours and the detection of salient, rewarding stimuli are context- and behavioural state-dependent, and it remains unclear how visual perception and orienting responses change with specific expectations. To start to address this question, we employed a virtual stimulus orienting paradigm based on prey capture to quantify the conditional expression of visual stimulus-evoked innate approaches in freely moving mice. We found that specific combinations of stimulus features selectively evoked innate approach or freezing responses when stimuli were unexpected. We discovered that prey capture experience, and therefore the expectation of prey in the environment, selectively modified approach frequency and altered the visual features that evoked approach. Thus, mice exhibit robust and selective orienting responses to parameterized visual stimuli, and these responses can be specifically modified via natural experience. This work provides critical insight into how natural appetitive behaviours are driven by both specific features of visual motion and internal states that alter stimulus salience.

     