Title: Brain computation by assemblies of neurons
Abstract: Our expanding understanding of the brain at the level of neurons and synapses, and the level of cognitive phenomena such as language, leaves a formidable gap between these two scales. Here we introduce a computational system which promises to bridge this gap: the Assembly Calculus. It encompasses operations on assemblies of neurons, such as project, associate, and merge, which appear to be implicated in cognitive phenomena, and can be shown, analytically as well as through simulations, to be plausibly realizable at the level of neurons and synapses. We demonstrate the reach of this system by proposing a brain architecture for syntactic processing in the production of language, compatible with recent experimental results.
Award ID(s):
1717349
NSF-PAR ID:
10208049
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences of the United States of America
Volume:
117
Issue:
25
ISSN:
0027-8424
Page Range / eLocation ID:
14464-14472
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.
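The central primitive behind operations such as project is easy to state: a stimulus fires repeatedly into a target area; at each step only the k neurons receiving the highest total synaptic input fire (inhibition modeled as a k-cap); and Hebbian plasticity multiplies every synapse from a firing neuron to a firing neuron by 1 + β. The following Python sketch of this dynamic is illustrative only, not the authors' simulation code; the names n, k, p, and beta follow the paper's random-graph model, but the values and the fixed number of rounds are assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1_000, 30, 0.05, 0.10   # illustrative: area size, cap, edge prob., plasticity

# Random synaptic graphs, per the G(n, p) model: stimulus -> area and area -> area.
W_stim = (rng.random((k, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

def project(rounds=20):
    """Fire a k-neuron stimulus into the area for several rounds with k-cap
    selection and Hebbian plasticity; return the resulting assembly."""
    stim = np.ones(k)                  # the whole stimulus fires every round
    prev = np.zeros(n, dtype=bool)     # winners of the previous round
    for _ in range(rounds):
        drive = stim @ W_stim + prev @ W_rec    # total synaptic input per neuron
        fired = np.zeros(n, dtype=bool)
        fired[np.argsort(drive)[-k:]] = True    # k-cap: only the top k fire
        W_stim[:, fired] *= 1 + beta            # Hebbian: stimulus -> new winners
        W_rec[np.ix_(prev, fired)] *= 1 + beta  # Hebbian: old winners -> new winners
        prev = fired
    return np.flatnonzero(prev)

assembly = project()   # k neuron indices; with plasticity on, the set stabilizes
```

In simulations of this kind, the winner set typically converges within a few rounds when beta > 0, which is the behavior the paper's analytical results make precise.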
  2. An assembly is a large population of neurons whose synchronous firing represents a memory, concept, word, or other cognitive category. Assemblies are believed to provide a bridge between high-level cognitive phenomena and low-level neural activity. Recently, a computational system called the Assembly Calculus (AC), with a repertoire of biologically plausible operations on assemblies, has been shown capable not only of simulating arbitrary space-bounded computation but also of simulating complex cognitive phenomena such as language, reasoning, and planning. However, the mechanism whereby assemblies can mediate learning has not been known. Here we present such a mechanism, and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is henceforth reliably recalled in response to new stimuli from the same class. Furthermore, such class assemblies will be distinguishable as long as the respective classes are reasonably separated, for example when they are clusters of similar assemblies or, more generally, separable with margin by a linear threshold function. To prove these results, we draw on random graph theory with dynamic edge weights to estimate sequences of activated vertices, yielding strong generalizations of calculations and theorems developed in this field over the past five years. These theorems are backed up by experiments demonstrating the successful formation of assemblies that represent concept classes on synthetic data drawn from such distributions, and also on MNIST, which lends itself to classification through one assembly per digit. Seen as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision: all key attributes of learning in a model of the brain. We argue that this learning mechanism, supported by separate sensory pre-processing mechanisms for extracting attributes such as edges or phonemes from real-world data, can be the basis of biological learning in cortex.
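As an illustration of the overlap-based recall described above, here is a minimal sketch under strong simplifying assumptions: a single feedforward projection step in place of recurrent dynamics, two synthetic classes generated as noisy copies of prototype stimuli, and invented sizes n_in, n, and k. It shows only the qualitative mechanism, namely that a few plastic presentations per class carve out a class assembly that fresh stimuli from the class then re-evoke.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n, k, p, beta = 2_000, 1_000, 30, 0.05, 0.10   # illustrative sizes

W = (rng.random((n_in, n)) < p).astype(float)   # random input -> area synapses

def k_cap(drive):
    """Inhibition as winner-take-all: the k most driven neurons fire."""
    fired = np.zeros(n, dtype=bool)
    fired[np.argsort(drive)[-k:]] = True
    return fired

def sample(proto, noise=0.2):
    """A class stimulus: the prototype with a fraction of inputs resampled."""
    s = proto.copy()
    flip = rng.random(k) < noise
    s[flip] = rng.integers(n_in, size=flip.sum())
    return s

protos = [rng.choice(n_in, size=k, replace=False) for _ in range(2)]
assemblies = []
for proto in protos:                     # form one assembly per class
    fired = np.zeros(n, dtype=bool)
    for _ in range(5):                   # a few labeled stimuli suffice
        x = np.zeros(n_in); x[sample(proto)] = 1
        fired = k_cap(x @ W)
        W[np.ix_(x > 0, fired)] *= 1 + beta   # Hebbian strengthening
    assemblies.append(fired)

def classify(stimulus):
    """Pick the class whose assembly overlaps most with the evoked winners."""
    x = np.zeros(n_in); x[stimulus] = 1
    evoked = k_cap(x @ W)
    return int(np.argmax([np.sum(evoked & a) for a in assemblies]))

print(classify(sample(protos[0])))   # typically recalls class 0 on a fresh stimulus
```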
  3. Abstract

    Various neurophysiological and cognitive functions are based on transferring information between spiking neurons via a complex system of synaptic connections. In particular, the capacity of presynaptic inputs to influence postsynaptic outputs, that is, the efficacy of the synapses, plays a principal role in all aspects of hippocampal neurophysiology. However, a direct link between the information processed at the level of individual synapses and the animal's ability to form memories at the organismal level is not yet fully understood. Here, we investigate the effect of synaptic transmission probabilities on the ability of hippocampal place cell ensembles to produce a cognitive map of the environment. Using methods from algebraic topology, we find that weakening synaptic connections increases spatial learning times, produces topological defects in the large-scale representation of the ambient space, and restricts the range of parameters for which place cell ensembles are capable of producing a map with the correct topological structure. On the other hand, the results indicate the possibility of compensatory phenomena: spatial learning deficiencies may be mitigated through enhancement of neuronal activity.

     
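The role of transmission probability can be illustrated with a toy model: place cells with fields on a circular track, spikes that cross synapses only with probability p_transmit, and a co-firing graph whose number of connected components serves as a crude stand-in (a Betti-0 proxy) for the integrity of the topological map. All parameters below are invented, and networkx's connected components stand in for the paper's full persistent-homology analysis; the sketch shows only the qualitative trend that weaker transmission fragments the representation.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_cells, T = 60, 2000                      # place cells; time steps for one lap

centers = np.sort(rng.random(n_cells))     # place-field centers on a unit circle
pos = np.arange(T) / T                     # animal's position over the lap

def circ_dist(a, b):
    d = np.abs(a - b)
    return np.minimum(d, 1 - d)            # distance on the circle

def cofiring_components(p_transmit, width=0.05, window=25):
    """Link two cells whenever their (probabilistically transmitted) spikes
    fall in the same time window; return the number of connected components."""
    in_field = circ_dist(pos[:, None], centers[None, :]) < width
    spikes = in_field & (rng.random((T, n_cells)) < p_transmit)  # synaptic failures
    g = nx.Graph()
    g.add_nodes_from(range(n_cells))
    for t0 in range(0, T, window):
        active = np.flatnonzero(spikes[t0:t0 + window].any(axis=0))
        for i in range(len(active)):
            for j in range(i + 1, len(active)):
                g.add_edge(active[i], active[j])
    return nx.number_connected_components(g)

for p_t in (0.5, 0.05, 0.005):
    print(p_t, cofiring_components(p_t))   # weaker transmission -> more fragments
```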
  4.
    Transmitter signalling is the universal chemical language of any nervous system, but little is known about its early evolution. Here, we summarize data on the distribution and functions of neurotransmitter systems in basal metazoans and outline hypotheses of their origins. We explore the scenario that neurons arose from genetically different populations of secretory cells capable of volume chemical transmission and integration of behaviours without canonical synapses. The closest representation of this primordial organization is currently found in Placozoa, disk-like animals with the simplest known cell composition but complex behaviours. We propose that injury-related signalling was the evolutionary predecessor of the integrative functions of early transmitters such as nitric oxide, ATP, protons, glutamate and small peptides. By contrast, acetylcholine, dopamine, noradrenaline, octopamine, serotonin and histamine were recruited as canonical neurotransmitters relatively later in animal evolution, only in bilaterians. Ligand-gated ion channels often preceded the establishment of novel neurotransmitter systems. Moreover, lineage-specific diversification of neurotransmitter receptors occurred in parallel within Cnidaria and several bilaterian lineages, including acoels. In summary, ancestral diversification of secretory signal molecules provided unique chemical microenvironments for behaviour-driven innovations, paving the way to complex brain functions and elementary cognition. This article is part of the theme issue 'Basal cognition: multicellularity, neurons and the cognitive lens'.
    more » « less
  5. Abstract

    The fields of brain‐inspired computing, robotics, and, more broadly, artificial intelligence (AI) seek to implement knowledge gleaned from the natural world in human‐designed electronics and machines. In this review, the opportunities presented by complex oxides, a class of electronic ceramic materials whose properties can be elegantly tuned by doping, electron interactions, and a variety of external stimuli near room temperature, are discussed. The review begins with a discussion of natural intelligence at the elementary level in the nervous system, followed by collective intelligence and learning at the animal-colony level mediated by social interactions. An important aspect highlighted is the vast range of spatial and temporal scales involved in learning and memory. The focus then turns to collective phenomena, such as metal‐to‐insulator transitions (MITs), ferroelectricity, and related examples, to highlight recent demonstrations of artificial neurons, synapses, and circuits and their learning. First‐principles theoretical treatments of the electronic structure and in situ synchrotron spectroscopy of operating devices are then discussed. The implementation of these experimental characteristics in neural networks and algorithm design is then reviewed. Finally, outstanding materials challenges that require a microscopic understanding of the physical mechanisms, which will be essential for advancing the frontiers of neuromorphic computing, are highlighted.

     
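As one concrete example of the kind of device physics such reviews cover, an oxide exhibiting a volatile metal-to-insulator transition can act as a spiking neuron when placed in an RC circuit: it integrates input current while insulating, switches metallic at a threshold voltage and discharges, then recovers. The sketch below is a purely hypothetical toy model with invented device values, not parameters or results from the review.

```python
import numpy as np

# Invented two-state device: insulating until the voltage across it reaches
# V_to_met, then metallic (low resistance) until it falls back below V_to_ins.
V_to_met, V_to_ins = 1.0, 0.2    # switching thresholds (V)
R_ins, R_met = 1e6, 1e2          # insulating / metallic resistance (ohm)
C, dt = 1e-9, 1e-7               # capacitance (F) and integration step (s)

def mott_neuron(I_in, steps=20_000):
    """Leaky integrate-and-fire from a volatile metal-insulator transition:
    the capacitor charges through the insulating oxide, and the MIT-triggered
    discharge is the spike. Returns the voltage trace."""
    v, metallic, trace = 0.0, False, []
    for _ in range(steps):
        R = R_met if metallic else R_ins
        v += dt * (I_in - v / R) / C      # Euler step of the RC dynamics
        if not metallic and v > V_to_met:
            metallic = True               # insulator -> metal: rapid discharge
        elif metallic and v < V_to_ins:
            metallic = False              # metal -> insulator: recover
        trace.append(v)
    return np.array(trace)

v = mott_neuron(I_in=2e-6)   # sawtooth spiking; larger current -> higher rate
```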