


Title: Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells
The activity of the grid cell population in the medial entorhinal cortex (MEC) of the mammalian brain forms a vector representation of the self-position of the animal. Recurrent neural networks have been proposed to explain the properties of the grid cells by updating the neural activity vector based on the velocity input of the animal; in doing so, the grid cell system effectively performs path integration. In this paper, we investigate the algebraic, geometric, and topological properties of grid cells using recurrent network models. Algebraically, we study the Lie group and Lie algebra of the recurrent transformation as a representation of self-motion. Geometrically, we study the conformal isometry of the Lie group representation, where the local displacement of the activity vector in the neural space is proportional to the local displacement of the agent in the 2D physical space. Topologically, the compact abelian Lie group representation automatically leads to the torus topology commonly assumed and observed in neuroscience. We then focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells. Our numerical experiments show that conformal isometry leads to hexagonal periodic patterns in the grid cell responses, and our model is capable of accurate path integration.
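The conformal isometry condition described above can be checked numerically. Below is a minimal sketch, assuming a hand-built toy embedding of 2D position by three plane waves whose equal-length wave vectors point 60° apart (a hexagonal arrangement); this illustrative construction is not the paper's trained recurrent network:

```python
import numpy as np

# Illustrative toy embedding (an assumption, not the paper's model): v(x)
# is built from three plane waves whose wave vectors share a common length
# r and point 60 degrees apart -- the hexagonal arrangement discussed above.
thetas = np.deg2rad([0.0, 60.0, 120.0])
r = 2.0  # assumed common wave-vector magnitude
A = r * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # shape (3, 2)

def v(x):
    """Map a 2D position x to a 6D neural activity vector."""
    phase = A @ x
    return np.concatenate([np.cos(phase), np.sin(phase)]) / np.sqrt(len(A))

# Conformal isometry: ||v(x + dx) - v(x)|| ~= s * ||dx|| with the same
# scaling factor s for every direction of a small displacement dx.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=2)
eps = 1e-4
ratios = []
for phi in np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False):
    dx = eps * np.array([np.cos(phi), np.sin(phi)])
    ratios.append(np.linalg.norm(v(x + dx) - v(x)) / np.linalg.norm(dx))
ratios = np.array(ratios)
# Relative spread across directions is tiny: the scaling is isotropic.
print(ratios.std() / ratios.mean() < 1e-3)  # True
```

For this particular construction the scaling factor works out analytically to r/√2, so the measured ratios cluster tightly around a single value regardless of the direction of dx.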
Award ID(s):
2015577
PAR ID:
10469354
Publisher / Repository:
Symmetry and Geometry in Neural Representations (NeurReps Workshop), Neural Information Processing Systems (NeurIPS 2022)
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding how grid cells perform path integration calculations remains a fundamental problem. In this paper, we conduct theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector. We identify two conditions on the transformation. One is a group representation condition that is necessary for path integration. The other is an isotropic scaling condition that ensures locally conformal embedding, so that the error in the vector representation translates conformally to the error in the 2D self-position. Then we investigate the simplest transformation, i.e., the linear transformation, uncover its explicit algebraic and geometric structure as a matrix Lie group of rotations, and explore the connection between the isotropic scaling condition and a special class of hexagon grid patterns. Finally, with our optimization-based approach, we manage to learn hexagon grid patterns that share similar properties with the grid cells in the rodent brain. The learned model is capable of accurate long-distance path integration. Code is available at https://github.com/ruiqigao/grid-cell-path.
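The group representation condition and the rotation-based linear transformation discussed above can be illustrated with a block-diagonal matrix of 2×2 rotations, where each block's rotation angle is a wave vector dotted with the displacement. A minimal sketch (the wave vectors and dimensionality are illustrative assumptions, not the learned model):

```python
import numpy as np

# Illustrative wave vectors (equal length, 60 degrees apart); six neural
# dimensions, i.e., three 2x2 rotation blocks. Assumed for this sketch.
thetas = np.deg2rad([0.0, 60.0, 120.0])
A = 2.0 * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # shape (3, 2)

def M(dx):
    """Linear transformation for a 2D displacement dx: a block-diagonal
    matrix where block k rotates by the angle A[k] . dx."""
    out = np.zeros((6, 6))
    for k, a in enumerate(A):
        ang = a @ dx
        c, s = np.cos(ang), np.sin(ang)
        out[2*k:2*k+2, 2*k:2*k+2] = [[c, -s], [s, c]]
    return out

# Group representation condition: composing two motions equals the
# transformation of their vector sum (a homomorphism from (R^2, +)).
d1, d2 = np.array([0.3, -0.1]), np.array([-0.2, 0.5])
assert np.allclose(M(d2) @ M(d1), M(d1 + d2))

# Path integration: traversing a closed loop returns the activity vector
# to its starting state, since the net displacement is zero.
v0 = np.ones(6) / np.sqrt(6.0)
v = v0.copy()
for dx in (d1, d2, -d1, -d2):
    v = M(dx) @ v
print(np.allclose(v, v0))  # True
```

Because every block is a planar rotation, the blocks commute and angles add, which is exactly why the displacement-to-matrix map respects addition of displacements.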
  2.
    Grid cells in the brain fire in strikingly regular hexagonal patterns across space. There are currently two seemingly unrelated frameworks for understanding these patterns. Mechanistic models account for hexagonal firing fields as the result of pattern-forming dynamics in a recurrent neural network with hand-tuned center-surround connectivity. Normative models specify a neural architecture, a learning rule, and a navigational task, and observe that grid-like firing fields emerge due to the constraints of solving this task. Here we provide an analytic theory that unifies the two perspectives by casting the learning dynamics of neural networks trained on navigational tasks as a pattern-forming dynamical system. This theory provides insight into the optimal solutions of diverse formulations of the normative task, and shows that symmetries in the representation of space correctly predict the structure of learned firing fields in trained neural networks. Further, our theory proves that a nonnegativity constraint on firing rates induces a symmetry-breaking mechanism which favors hexagonal firing fields. We extend this theory to the case of learning multiple grid maps and demonstrate that optimal solutions consist of a hierarchy of maps with increasing length scales. These results unify previous accounts of grid cell firing and provide a novel framework for predicting the learned representations of recurrent neural networks.
  3. Equivariant representation is necessary for the brain and artificial perceptual systems to faithfully represent a stimulus under some (Lie) group transformations. However, it remains unknown how recurrent neural circuits in the brain represent stimuli equivariantly, or how abstract group operators are represented neurally. The present study uses a one-dimensional (1D) translation group as an example to explore a general recurrent neural circuit mechanism of equivariant stimulus representation. We found that a continuous attractor network (CAN), a canonical neural circuit model, self-consistently generates a continuous family of stationary population responses (attractors) that represents the stimulus equivariantly. Inspired by the Drosophila compass circuit, we found that the 1D translation operators can be represented by extra speed neurons alongside the CAN, where the speed neurons' responses represent the moving speed (the 1D translation group parameter), and their feedback connections to the CAN represent the translation generator (the Lie algebra). We demonstrated that the network responses are consistent with experimental data. Our model demonstrates, for the first time, how recurrent neural circuitry in the brain can achieve equivariant stimulus representation.
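The speed-neuron mechanism above, in which feedback connections encode the Lie-algebra generator of translation, can be sketched on a discrete ring: an antisymmetric nearest-neighbor weight matrix plays the role of the generator, and exponentiating it (scaled by the speed) yields the translation operator that moves the activity bump. The ring size, bump shape, and speed value below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

N = 64
idx = np.arange(N)
# Bump-shaped population activity on a ring, peaked at neuron 20
# (a stand-in for a continuous-attractor state; shape is assumed).
bump = np.exp(3.0 * np.cos(2.0 * np.pi * (idx - 20) / N))

# Lie-algebra generator of 1D translation: antisymmetric circular
# finite-difference weights (the "feedback connections" in the abstract).
D = np.zeros((N, N))
for i in range(N):
    D[i, (i - 1) % N] = 0.5   # excitation from the neuron behind
    D[i, (i + 1) % N] = -0.5  # inhibition from the neuron ahead

# Group element: moving at speed s for unit time applies T(s) = exp(s*D),
# which translates the bump by roughly s grid units along the ring.
s = 5.0
shifted = expm(s * D) @ bump
print(int(np.argmax(bump)), int(np.argmax(shifted)))  # peak moves by ~s
```

Scaling the same generator by a different speed value shifts the bump proportionally, which is the sense in which the speed neurons parameterize the translation group in this sketch.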
  4.

    In this note, we present a synopsis of geometric symmetries for (spin 0) perturbations around (4D) black holes and de Sitter space. For black holes, we focus on static perturbations, for which the (exact) geometric symmetries have the group structure of SO(1,3). The generators consist of three spatial rotations, and three conformal Killing vectors obeying a special "melodic" condition. The static perturbation solutions form a unitary (principal series) representation of the group. The recently uncovered ladder symmetries follow from this representation structure; they explain the well-known vanishing of the black hole Love numbers. For dynamical perturbations around de Sitter space, the geometric symmetries are less surprising, following from the SO(1,4) isometry. As is known, the quasinormal solutions form a non-unitary representation of the isometry group. We provide explicit expressions for the ladder operators associated with this representation. In both cases, the ladder structures help connect the boundary condition at the horizon with that at infinity (black hole) or origin (de Sitter space), and they manifest as contiguous relations of the hypergeometric solutions.

     
  5. Since the first place cell was recorded and the cognitive-map theory was subsequently formulated, investigation of spatial representation in the hippocampal formation has evolved in stages. Early studies sought to verify the spatial nature of place cell activity and determine its sensory origin. A new epoch started with the discovery of head direction cells and the realization of the importance of angular and linear movement integration in generating spatial maps. A third epoch began when investigators turned their attention to the entorhinal cortex, which led to the discovery of grid cells and border cells. This review will show how ideas about integration of self-motion cues have shaped our understanding of spatial representation in hippocampal–entorhinal systems from the 1970s until today. It is now possible to investigate how specialized cell types of these systems work together, and spatial mapping may become one of the first cognitive functions to be understood in mechanistic detail.