

Title: Emergence of direction-selective retinal cell types in task-optimized deep learning models
Convolutional neural networks (CNNs), a class of deep learning models, have recently been successful in modeling sensory cortices and retinal circuits by optimizing their performance on machine learning tasks, a process known as task optimization. Previous research has shown that task-optimized CNNs can explain why the retina efficiently encodes natural stimuli and how certain retinal cell types contribute to efficient encoding. In this work, we used task-optimized CNNs to explain the computational mechanisms underlying motion-selective retinal circuits. We designed a biologically constrained CNN and optimized its performance on a motion-classification task. Drawing on the psychophysics, deep learning, and systems neuroscience literature, we developed a toolbox of methods to reverse engineer the computational mechanisms learned by our model. Through this reverse engineering, we propose a computational mechanism in which direction-selective ganglion cells and starburst amacrine cells, both experimentally observed retinal cell types, emerge in our model to discriminate among moving stimuli. This emergence suggests that direction-selective circuits in the retina are ecologically designed to robustly discriminate among moving stimuli. Our results and methods also provide a framework for building more interpretable deep learning models and for understanding them.
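The record itself contains no code, but as a rough, hypothetical illustration of the kind of biologically constrained CNN the abstract describes, the sketch below (PyTorch) stacks a "bipolar-like" spatiotemporal convolution, an "amacrine-like" lateral stage, and a "ganglion-like" readout trained to classify motion direction. All layer names, sizes, and the four-way output are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch only: a small biologically inspired CNN for
# motion-direction classification. Layer names and sizes are assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn

class RetinaMotionCNN(nn.Module):
    def __init__(self, n_directions=4):
        super().__init__()
        # "Bipolar-like" stage: spatiotemporal filtering of the input movie
        # (shape: batch x 1 channel x time x height x width).
        self.bipolar = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(5, 7, 7), padding=(2, 3, 3)),
            nn.ReLU(),
        )
        # "Amacrine-like" stage: lateral interactions across space and time.
        self.amacrine = nn.Sequential(
            nn.Conv3d(8, 8, kernel_size=(5, 5, 5), padding=(2, 2, 2)),
            nn.ReLU(),
        )
        # "Ganglion-like" readout: pool over space and time, then classify
        # the direction of motion.
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.readout = nn.Linear(8, n_directions)

    def forward(self, movie):
        x = self.bipolar(movie)
        x = self.amacrine(x)
        x = self.pool(x).flatten(1)
        return self.readout(x)

model = RetinaMotionCNN()
dummy_movie = torch.randn(2, 1, 16, 32, 32)   # batch of 2 short movies
logits = model(dummy_movie)                   # shape: (2, 4)
```

In a setup like this, the model would be trained with a standard cross-entropy loss on labeled motion clips, and the learned convolutional filters are what one would subsequently reverse engineer.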
Award ID(s):
1810758 2003830
NSF-PAR ID:
10324411
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Computational Biology
Volume:
29
Issue:
4
ISSN:
1066-5277
Page Range / eLocation ID:
370-381
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. An organizational feature of neural circuits is the specificity of synaptic connections. A striking example is the direction-selective (DS) circuit of the retina. There are multiple subtypes of DS retinal ganglion cells (DSGCs), each preferring motion along one of four directions. This computation is mediated by selective wiring of a single inhibitory interneuron, the starburst amacrine cell (SAC), with each DSGC subtype preferentially receiving input from a subset of SAC processes. We hypothesize that the molecular basis of this wiring is mediated in part by unique expression profiles of DSGC subtypes. To test this, we first performed paired recordings from isolated mouse retina of both sexes to determine that postnatal day 10 (P10) represents the age at which asymmetric synapses form. Second, we performed RNA-sequencing and differential expression analysis on isolated P10 ON-OFF DSGCs tuned for either nasal or ventral motion and identified candidates that may promote direction-specific wiring. We then used a conditional knockout strategy to test the role of one candidate, the secreted synaptic organizer cerebellin-4 (Cbln4), in the development of DS tuning. Using two-photon calcium imaging, we observed a small deficit in directional tuning among ventral-preferring DSGCs lacking Cbln4, though whole-cell voltage clamp recordings did not identify a significant change in inhibitory inputs. This suggests that Cbln4 does not function primarily via a cell-autonomous mechanism to instruct wiring of DS circuits. Nevertheless, our transcriptomic analysis identified unique candidate factors for gaining insights into the molecular mechanisms that instruct wiring specificity in the DS circuit.

    Significance Statement: By performing mRNA transcriptome analysis on three populations of direction-selective ganglion cells - two preferring horizontal motion and one preferring vertical motion - we identified differentially expressed candidate molecules potentially involved in cell subtype-specific synaptogenesis within this circuit. We tested the role of one differentially expressed candidate, Cbln4, which is enriched in ventral-preferring DSGCs. Using a targeted knockout approach, we found that deletion of Cbln4 led to a small reduction in direction-selective tuning while maintaining dendritic morphology and normal strength and asymmetry of inhibitory synaptic transmission. Overall, we have shown that this approach can be used to identify interesting candidate molecules, and future functional studies are required to reveal the mechanisms by which these candidates influence synaptic wiring within specific circuits.
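As a point of reference for the "directional tuning" measurements discussed above, direction selectivity is commonly summarized with a vector-sum direction selectivity index. The sketch below shows one standard formulation, which may differ from the exact quantification used in the study; the example responses are made up for illustration.

```python
# Minimal sketch of a vector-sum direction selectivity index (DSI), a common
# metric; the study's exact quantification may differ.
import numpy as np

def direction_selectivity_index(responses, directions_deg):
    """Vector-sum DSI: 0 = untuned, 1 = perfectly direction selective.

    responses:      mean response per stimulus direction
    directions_deg: stimulus directions in degrees (same length as responses)
    """
    responses = np.asarray(responses, dtype=float)
    angles = np.deg2rad(np.asarray(directions_deg, dtype=float))
    vector_sum = np.sum(responses * np.exp(1j * angles))
    return np.abs(vector_sum) / np.sum(responses)

# Example: a cell responding most strongly to motion at 270 degrees.
dirs = [0, 45, 90, 135, 180, 225, 270, 315]
resp = [1.0, 1.2, 0.8, 1.1, 1.0, 2.5, 6.0, 2.8]
print(direction_selectivity_index(resp, dirs))   # ~0.45, moderately tuned
```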

     
  2. Convolutional neural networks (CNNs) are an emerging technique for modeling neural circuits and have been shown to converge to biologically plausible functionality in cortical circuits via task optimization. This functionality has not been observed in CNN models of retinal circuits via task optimization. We sought to observe this convergence in retinal circuits by designing a biologically inspired CNN model of a motion-detection retinal circuit and optimizing it to solve a motion-classification task. The learned weights and parameters indicated that the CNN converged to direction-sensitive ganglion and amacrine cells, cell types that have been observed in biology, providing evidence that task optimization is a fair method of building retinal models. The analysis used to understand the functionality of our CNN also indicates that the underlying mechanisms of biologically constrained deep learning models are easier to reason about than those of traditional deep learning models.
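As a sketch of how one might probe a trained model of this kind for direction-sensitive units (the analysis described in the abstract is not reproduced here), the snippet below generates drifting-bar movies in eight directions and reports the direction to which a given unit responds most strongly. The bar generator, its parameters, and the stand-in "unit" are all hypothetical.

```python
# Illustrative sketch only (not the authors' analysis code): probe a model
# unit with drifting-bar movies and report its most effective direction.
import numpy as np

DIRECTIONS = (0, 45, 90, 135, 180, 225, 270, 315)

def drifting_bar_movie(direction_deg, n_frames=16, size=32, half_width=2.0, speed=1.5):
    """Movie (n_frames, size, size) of a bright bar sweeping along direction_deg."""
    theta = np.deg2rad(direction_deg)
    yy, xx = np.mgrid[0:size, 0:size]
    # Position of each pixel along the motion axis, measured from the center.
    proj = (xx - size / 2) * np.cos(theta) + (yy - size / 2) * np.sin(theta)
    movie = np.zeros((n_frames, size, size), dtype=np.float32)
    for t in range(n_frames):
        bar_pos = -size / 2 + speed * t          # the bar advances every frame
        movie[t] = (np.abs(proj - bar_pos) < half_width).astype(np.float32)
    return movie

def preferred_direction(unit_response):
    """unit_response maps a (T, H, W) movie to a scalar activation."""
    responses = {d: unit_response(drifting_bar_movie(d)) for d in DIRECTIONS}
    return max(responses, key=responses.get), responses

# Stand-in "unit": a template matcher for the 270-degree bar movie, so the
# probe should recover 270 as its preferred direction.
TEMPLATE = drifting_bar_movie(270)

def toy_unit(movie):
    return float(np.sum(movie * TEMPLATE))

print(preferred_direction(toy_unit)[0])          # -> 270
```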
  3. The optic nerve transmits visual information to the brain as trains of discrete events, a low-power, low-bandwidth communication channel also exploited by silicon retina cameras. Extracting high-fidelity visual input from retinal event trains is thus a key challenge for both computational neuroscience and neuromorphic engineering. Here, we investigate whether sparse coding can enable the reconstruction of high-fidelity images and video from retinal event trains. Our approach is analogous to compressive sensing, in which only a random subset of pixels is transmitted and the missing information is estimated via inference. We employed a variant of the Locally Competitive Algorithm to infer sparse representations from retinal event trains, using a dictionary of convolutional features optimized via stochastic gradient descent and trained in an unsupervised manner using a local Hebbian learning rule with momentum. We used an anatomically realistic retinal model with stochastic graded release from cones and bipolar cells to encode thumbnail images as spike trains arising from ON and OFF retinal ganglion cells. The spikes from each model ganglion cell were summed over a 32 msec time window, yielding a noisy rate-coded image. Analogous to how the primary visual cortex is postulated to infer features from noisy spike trains arising from the optic nerve, we inferred a higher-fidelity sparse reconstruction from the noisy rate-coded image using a convolutional dictionary trained on the original CIFAR10 database. To investigate whether a similar approach works on non-stochastic data, we demonstrate that the same procedure can be used to reconstruct high-frequency video from the asynchronous events arising from a silicon retina camera moving through a laboratory environment.
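For readers unfamiliar with the Locally Competitive Algorithm (LCA) mentioned above, here is a minimal, fully connected sketch of LCA inference. The paper's convolutional dictionary, Hebbian learning rule, and retinal encoding model are not reproduced; all sizes, thresholds, and step sizes are illustrative.

```python
# Minimal sketch of Locally Competitive Algorithm (LCA) inference for sparse
# coding, using a small fully connected dictionary for clarity.
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding nonlinearity: drives small coefficients to zero."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_infer(x, dictionary, lam=0.1, n_steps=200, dt=0.05):
    """Infer sparse coefficients a such that dictionary @ a approximates x.

    x:          input vector (n_pixels,)
    dictionary: columns are unit-norm features (n_pixels, n_features)
    """
    gram = dictionary.T @ dictionary             # feature-feature overlap
    drive = dictionary.T @ x                     # feedforward input to each unit
    u = np.zeros(dictionary.shape[1])            # membrane potentials
    for _ in range(n_steps):
        a = soft_threshold(u, lam)               # currently active coefficients
        # Leaky integration with lateral competition between overlapping
        # features; "+ a" removes each unit's self-inhibition (Gram diagonal).
        u += dt * (drive - u - gram @ a + a)
    return soft_threshold(u, lam)

# Tiny demo: recover a 2-sparse signal from a random normalized dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0, keepdims=True)
a_true = np.zeros(128)
a_true[[3, 40]] = [1.0, -0.7]
a_hat = lca_infer(D @ a_true, D)
print(np.argsort(np.abs(a_hat))[-2:])            # expected to be the true atoms (3 and 40)
```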
  4. Understanding circuit properties from physiological data presents two challenges: (i) recordings do not reveal connectivity, and (ii) stimuli only exercise circuits to a limited extent. We address these challenges for the mouse visual system with a novel neural manifold obtained using unsupervised algorithms. Each point in our manifold is a neuron; nearby neurons respond similarly in time to similar parts of a stimulus ensemble. This ensemble includes drifting gratings and flows, i.e., patterns resembling what a mouse would “see” running through fields. Regarding (i), our manifold differs from the standard practice in computational neuroscience: embedding trials in neural coordinates. Topology matters: we infer that, if the circuit consists of separate components, the manifold is discontinuous (illustrated with retinal data). If there is significant overlap between circuits, the manifold is nearly continuous (cortical data). Regarding (ii), most of the cortical manifold is not activated with conventional gratings, despite their prominence in laboratory settings. Our manifold suggests that cortical circuitry is organized into a few specialized circuits for specific members of the stimulus ensemble, together with circuits involving neurons that respond to multiple stimuli. To approach real circuits, local neighborhoods in the manifold are identified with actual circuit components. For retinal data, we show these components correspond to distinct ganglion cell types by their mosaic-like receptive field organization, while for cortical data, neighborhoods organize neurons by type (excitatory/inhibitory) and anatomical layer. In summary: the topology of neural organization reflects well the underlying anatomy and physiology of the retina and the visual cortex.
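As a rough sketch of the "each point is a neuron" construction, the snippet below embeds synthetic neurons by the similarity of their response time courses, using an off-the-shelf spectral embedding as a stand-in for the unsupervised algorithms the abstract refers to. The synthetic data and the correlation-based similarity are assumptions for illustration, not the authors' method.

```python
# Rough sketch: embed neurons so that cells with similar stimulus-driven
# responses land near each other ("each point is a neuron"). Uses synthetic
# data and a generic spectral embedding, not the paper's pipeline.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)

# Synthetic "recordings": 60 neurons x 500 time points, drawn from two
# response motifs (e.g., two putative cell types) plus noise.
motifs = rng.normal(size=(2, 500))
labels = np.repeat([0, 1], 30)
responses = motifs[labels] + 0.5 * rng.normal(size=(60, 500))

# Similarity between neurons = correlation of their response time courses.
similarity = np.corrcoef(responses)
affinity = np.clip(similarity, 0, None)          # keep non-negative weights

# Two-dimensional embedding: one point per neuron.
embedding = SpectralEmbedding(n_components=2, affinity="precomputed")
points = embedding.fit_transform(affinity)
print(points.shape)                              # (60, 2); the two motifs should separate
```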