Brain computation by assemblies of neurons

Our expanding understanding of the brain at the level of neurons and synapses, and at the level of cognitive phenomena such as language, leaves a formidable gap between these two scales. Here we introduce a computational system which promises to bridge this gap: the Assembly Calculus. It encompasses operations on assemblies of neurons, such as project, associate, and merge, which appear to be implicated in cognitive phenomena and can be shown, analytically as well as through simulations, to be plausibly realizable at the level of neurons and synapses. We demonstrate the reach of this system by proposing a brain architecture for syntactic processing in the production of language, compatible with recent experimental results.

Abstract

Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.
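To make the neural-level realization concrete, here is a minimal simulation sketch of the Assembly Calculus's central primitive, projection. It is a simplified reconstruction, not the authors' released code; the area size n, cap size k, connection probability p, plasticity rate beta, and step count are illustrative choices. A fixed stimulus repeatedly fires into a downstream area modeled as a sparse random graph; inhibition is modeled as k-winners-take-all, so only the k neurons with the highest synaptic input fire at each step, and synapses onto those winners from neurons that just fired are strengthened Hebbianly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed for this sketch, not the paper's settings):
# n neurons in the target area, k winners per step, connection probability p,
# Hebbian plasticity increment beta.
n, k, p, beta, steps = 1000, 50, 0.05, 0.10, 15

# Random synaptic connectivity: stimulus -> area and recurrent area -> area.
stim_to_area = (rng.random((k, n)) < p).astype(float)
recurrent = (rng.random((n, n)) < p).astype(float)

stimulus = np.ones(k)                       # all k stimulus neurons fire each step
prev_winners = np.zeros(n, dtype=bool)      # nothing fires in the area initially

for t in range(steps):
    # Total synaptic input to each neuron in the target area.
    inputs = stimulus @ stim_to_area + prev_winners.astype(float) @ recurrent
    # Inhibition as k-winners-take-all: only the k highest-input neurons fire.
    winners = np.zeros(n, dtype=bool)
    winners[np.argpartition(inputs, -k)[-k:]] = True
    # Hebbian plasticity: synapses from neurons that just fired onto the new
    # winners are multiplied by (1 + beta).
    stim_to_area[:, winners] *= 1 + beta
    recurrent[np.ix_(prev_winners, winners)] *= 1 + beta
    print(f"step {t:2d}: overlap with previous step = {np.sum(winners & prev_winners)}/{k}")
    prev_winners = winners
```

Under these toy parameters the printed overlap typically rises to k within a few steps; that convergence of the winner set is the stable assembly that projection creates in the downstream area.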
- Award ID(s): 1717349
- PAR ID: 10208049
- Date Published:
- Journal Name: Proceedings of the National Academy of Sciences of the United States of America
- Volume: 117
- Issue: 25
- ISSN: 0027-8424
- Page Range / eLocation ID: 14464-14472
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- An assembly is a large population of neurons whose synchronous firing represents a memory, concept, word, or other cognitive category. Assemblies are believed to provide a bridge between high-level cognitive phenomena and low-level neural activity. Recently, a computational system called the Assembly Calculus (AC), with a repertoire of biologically plausible operations on assemblies, has been shown capable of simulating arbitrary space-bounded computation, as well as complex cognitive phenomena such as language, reasoning, and planning. However, the mechanism whereby assemblies can mediate learning has not been known. Here we present such a mechanism, and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is henceforth reliably recalled in response to new stimuli from the same class. Furthermore, such class assemblies will be distinguishable as long as the respective classes are reasonably separated, for example when they are clusters of similar assemblies, or more generally separable with margin by a linear threshold function. To prove these results, we draw on random graph theory with dynamic edge weights to estimate sequences of activated vertices, yielding strong generalizations of previous calculations and theorems in this field over the past five years. These theorems are backed up by experiments demonstrating the successful formation of assemblies which represent concept classes on synthetic data drawn from such distributions, and also on MNIST, which lends itself to classification through one assembly per digit. Seen as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision, all key attributes of learning in a model of the brain. We argue that this learning mechanism, supported by separate sensory pre-processing mechanisms for extracting attributes, such as edges or phonemes, from real-world data, can be the basis of biological learning in cortex. (A toy code sketch of this assembly-based classification mechanism follows this list.)
- During recent decades, our understanding of the brain has advanced dramatically at both the cellular and molecular levels and at the cognitive neurofunctional level; however, a huge gap remains between the microlevel of physiology and the macrolevel of cognition. We propose that computational models based on assemblies of neurons can serve as a blueprint for bridging these two scales. We discuss recently developed computational models of assemblies that have been demonstrated to mediate higher cognitive functions such as the processing of simple sentences, to be realistically realizable by neural activity, and to possess general computational power.
- Transmitter signalling is the universal chemical language of any nervous system, but little is known about its early evolution. Here, we summarize data about the distribution and functions of neurotransmitter systems in basal metazoans, and outline hypotheses of their origins. We explore the scenario that neurons arose from genetically different populations of secretory cells capable of volume chemical transmission and integration of behaviours without canonical synapses. The closest representation of this primordial organization is currently found in Placozoa, disk-like animals with the simplest known cell composition but complex behaviours. We propose that injury-related signalling was the evolutionary predecessor for integrative functions of early transmitters such as nitric oxide, ATP, protons, glutamate and small peptides. By contrast, acetylcholine, dopamine, noradrenaline, octopamine, serotonin and histamine were recruited as canonical neurotransmitters relatively later in animal evolution, only in bilaterians. Ligand-gated ion channels often preceded the establishment of novel neurotransmitter systems. Moreover, lineage-specific diversification of neurotransmitter receptors occurred in parallel within Cnidaria and several bilaterian lineages, including acoels. In summary, ancestral diversification of secretory signal molecules provides unique chemical microenvironments for behaviour-driven innovations that pave the way to complex brain functions and elementary cognition. This article is part of the theme issue ‘Basal cognition: multicellularity, neurons and the cognitive lens'.
- Most organisms on Earth possess an internal timekeeping system which ensures that bodily processes such as sleep, wakefulness or digestion take place at the right time. These precise daily rhythms are kept in check by a master clock in the brain. There, thousands of neurons – some of which carry an internal ‘molecular clock’ – connect to each other through structures known as synapses. Exactly how the resulting network is organised to support circadian timekeeping remains unclear. To explore this question, Shafer, Gutierrez et al. focused on fruit flies, as recent efforts have systematically mapped every neuron and synaptic connection in the brain of this model organism. Analysing available data from the hemibrain connectome project at Janelia revealed that the neurons with the most important timekeeping roles were in fact forming the fewest synapses within the network. In addition, neurons without internal molecular clocks mediated strong synaptic connections between those that did, suggesting that ‘clockless’ cells still play an integral role in circadian timekeeping. With this research, Shafer, Gutierrez et al. provide unexpected insights into the organisation of the master body clock. Better understanding the networks that underpin circadian rhythms will help to grasp how and why these rhythms are disrupted in obesity, depression and Alzheimer’s disease.
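Returning to the assembly-based learning mechanism described in the first related item above, the following toy sketch illustrates the ingredients: a few labelled stimuli per class, a k-winners-take-all cap, and Hebbian weight updates. The data model, all parameters, and the overlap-based read-out are invented for illustration and are not the construction analyzed in that work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parameters and data model (invented for this sketch).
n_stim, n_area, k, p, beta = 500, 1000, 50, 0.05, 0.10
n_classes, samples_per_class, proto_size, shared = 3, 5, 60, 50

# Each class is a cluster of similar stimuli: a sample shares `shared` of the
# `proto_size` active stimulus neurons of its class prototype, plus some noise.
prototypes = [rng.choice(n_stim, proto_size, replace=False) for _ in range(n_classes)]

def sample(c):
    keep = rng.choice(prototypes[c], shared, replace=False)
    noise = rng.choice(n_stim, proto_size - shared, replace=False)
    x = np.zeros(n_stim)
    x[keep] = 1.0
    x[noise] = 1.0
    return x

W = (rng.random((n_stim, n_area)) < p).astype(float)   # stimulus -> area synapses

def cap(inputs):
    # k-winners-take-all inhibition: only the k highest-input neurons fire.
    winners = np.zeros(n_area, dtype=bool)
    winners[np.argpartition(inputs, -k)[-k:]] = True
    return winners

# Training: a few stimuli per class with Hebbian updates; the neurons that fire
# most often for a class serve as its class assembly.
class_assemblies = []
for c in range(n_classes):
    counts = np.zeros(n_area)
    for _ in range(samples_per_class):
        x = sample(c)
        winners = cap(x @ W)
        W[np.ix_(x > 0, winners)] *= 1 + beta   # strengthen firing -> winner synapses
        counts += winners
    class_assemblies.append(np.argsort(counts)[-k:])    # top-k most frequent winners

# Test: classify new stimuli by which class assembly overlaps the winners most.
trials, correct = 60, 0
for _ in range(trials):
    c = rng.integers(n_classes)
    winners = cap(sample(c) @ W)
    overlaps = [np.isin(a, np.flatnonzero(winners)).sum() for a in class_assemblies]
    correct += int(np.argmax(overlaps) == c)
print(f"toy accuracy: {correct}/{trials}")
```

Classifying by largest overlap with a stored winner set is a stand-in for the recall behaviour proved in that work; the point is only that, in this toy setting, a handful of samples per class suffices to form distinguishable class assemblies.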