Abstract The brain has long been conceptualized as a network of neurons connected by synapses. However, attempts to describe the connectome using established network science models have yielded conflicting outcomes, leaving the architecture of neural networks unresolved. Here, by performing a comparative analysis of eight experimentally mapped connectomes, we find that their degree distributions cannot be captured by the well-established random or scale-free models. Instead, the node degrees and strengths are well approximated by lognormal distributions, although these lack a mechanistic explanation in the context of the brain. By acknowledging the physical network nature of the brain, we show that neuron size is governed by a multiplicative process, which allows us to analytically derive the lognormal nature of the neuron length distribution. Our framework not only predicts the degree and strength distributions across each of the eight connectomes, but also yields a series of novel and empirically falsifiable relationships between different neuron characteristics. The resulting multiplicative network represents a novel architecture for network science, whose distinctive quantitative features bridge critical gaps between neural structure and function, with implications for brain dynamics, robustness, and synchronization.
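The multiplicative mechanism invoked in this abstract can be illustrated with a small simulation (a toy sketch, not the paper's model; the growth-factor range and step count below are invented): the log of a product of i.i.d. positive factors is a sum of i.i.d. terms, so by the central limit theorem the product is approximately lognormal.

```python
import math
import random

random.seed(0)

def simulate_lengths(n_neurons=10_000, n_steps=200):
    """Grow each neuron's length by i.i.d. multiplicative factors.

    log(length) is then a sum of i.i.d. terms and, by the CLT,
    approximately normal, so the length itself is approximately lognormal.
    """
    lengths = []
    for _ in range(n_neurons):
        log_len = 0.0
        for _ in range(n_steps):
            log_len += math.log(random.uniform(0.9, 1.2))
        lengths.append(math.exp(log_len))
    return lengths

lengths = simulate_lengths()
logs = [math.log(x) for x in lengths]
mu = sum(logs) / len(logs)
var = sum((v - mu) ** 2 for v in logs) / len(logs)
# If lengths are lognormal, their logs are normal: skewness near zero.
skew = sum((v - mu) ** 3 for v in logs) / (len(logs) * var ** 1.5)
```

With these invented parameters the sample mean of the log-lengths lands near 200 · E[ln U(0.9, 1.2)] ≈ 9.07, and the skewness of the logs stays near zero, which is the lognormal signature the abstract relies on.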
The neuroconnectionist research programme
Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call ‘neuroconnectionism’. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
- Award ID(s): 1942438
- PAR ID: 10510914
- Publisher / Repository: Nature Reviews Neuroscience
- Date Published:
- Journal Name: Nature Reviews Neuroscience
- Volume: 24
- Issue: 7
- ISSN: 1471-003X
- Page Range / eLocation ID: 431 to 450
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Random dropout has become a standard regularization technique in artificial neural networks (ANNs), but it is currently unknown whether an analogous mechanism exists in biological neural networks (BioNNs). If it does, its structure is likely to have been optimized by hundreds of millions of years of evolution, which may suggest novel dropout strategies for large-scale ANNs. We propose that the brain's serotonergic fibers (axons) meet some of the expected criteria because of their ubiquitous presence, stochastic structure, and ability to grow throughout the individual's lifespan. Since the trajectories of serotonergic fibers can be modeled as paths of anomalous diffusion processes, in this proof-of-concept study we investigated a dropout algorithm based on superdiffusive fractional Brownian motion (FBM). The results demonstrate that serotonergic fibers can potentially implement a dropout-like mechanism in brain tissue, supporting neuroplasticity. They also suggest that mathematical theories of the structure and dynamics of serotonergic fibers can contribute to the design of dropout algorithms in ANNs.
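The abstract's idea of dropout driven by fiber-like stochastic paths can be sketched with a toy 1-D stand-in (the function and its parameters are invented for illustration; a persistent random walk substitutes here for the paper's superdiffusive FBM):

```python
import random

random.seed(1)

def path_dropout_mask(n_units, n_steps, persistence=0.8):
    """Toy structured dropout: a persistent random walk (a crude stand-in
    for a superdiffusive fiber trajectory) moves over a 1-D row of units,
    and every unit the 'fiber' visits is dropped (mask value 0.0).
    """
    pos = n_units // 2
    step = 1
    visited = set()
    for _ in range(n_steps):
        if random.random() > persistence:  # occasionally reverse direction
            step = -step
        pos = (pos + step) % n_units       # wrap around the row of units
        visited.add(pos)
    return [0.0 if i in visited else 1.0 for i in range(n_units)]

mask = path_dropout_mask(100, 40)
```

Unlike standard dropout's i.i.d. masks, units along the walk's path are silenced together, so the dropped set is spatially correlated, which is the qualitative property the fiber analogy introduces.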
-
Abstract Neuromorphic computing mimics the organizational principles of the brain in its quest to replicate the brain's intellectual abilities. An impressive ability of the brain is its adaptive intelligence, which allows it to regulate its functions "on the fly" to cope with myriad, ever-changing situations. In particular, the brain displays three adaptive and advanced intelligence abilities: context-awareness, cross-frequency coupling, and feature binding. To mimic these adaptive cognitive abilities, we design and simulate a novel, hardware-based adaptive oscillatory neuron using a lattice of magnetic skyrmions. Charge current fed to the neuron reconfigures the skyrmion lattice, thereby modulating the neuron's state, its dynamics and its transfer function "on the fly." This adaptive neuron is used to demonstrate the three cognitive abilities, of which context-awareness and cross-frequency coupling have not previously been realized in hardware neurons. Additionally, the neuron is used to construct an adaptive artificial neural network (ANN) and perform context-aware diagnosis of breast cancer. Simulations show that the adaptive ANN diagnoses cancer with higher accuracy while learning faster and using a more compact and energy-efficient network than a nonadaptive ANN. The work further describes how hardware-based adaptive neurons can mitigate several critical challenges facing contemporary ANNs. Modern ANNs require large amounts of training data, energy, and chip area, and are highly task-specific; conversely, hardware-based ANNs built with adaptive neurons show faster learning, compact architectures, energy efficiency, and fault tolerance, and can lead to the realization of broader artificial intelligence.
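The "on the fly" reconfiguration described above can be caricatured in software (purely illustrative: the paper's neuron is a skyrmion-lattice device, and the gain/threshold mapping below is invented): a context signal reshapes the neuron's transfer function so the same input is processed differently in different contexts.

```python
import math

def adaptive_neuron(x, context):
    """Toy software analogue of an adaptive neuron: `context` (0..1)
    reconfigures the transfer function's gain and threshold, standing in
    for the charge current that reconfigures the skyrmion lattice.
    Both coefficient choices are arbitrary illustrations.
    """
    gain = 1.0 + 4.0 * context        # context steepens the transfer curve
    threshold = 0.5 - 0.3 * context   # and lowers the firing threshold
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))
```

For a fixed input, raising the context signal pushes the same neuron toward a sharper, more confident response, a minimal form of context-awareness.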
-
ABSTRACT Physics forms the core of any Materials Science Programme at undergraduate level. Knowing the properties of materials is fundamental to developing and designing new materials and new applications for known materials. "Physical Physics" is an innovative and promising physics education approach that integrates physical activity with mechanics and material properties. It aims to significantly enhance the learning experience and to illustrate how physics works, while allowing students to be active participants and take ownership of the learning process. It has been successfully piloted with undergraduate students studying mechanics on a Games Development Programme. It is a structured, guided learning approach that provides a scaffold for learners to develop their problem-solving skills. The objective of including applied physics in a programme is to introduce students to the mathematical world. Today, students view the world through smart devices. By incorporating student-recorded videos into the laboratory experience, students can visualise the mathematical world. Sitting in a classroom learning about material properties does not easily facilitate an understanding of mathematical equations as mappings to a physical reality. To get students motivated and immersed in the real mathematical and physical world, we use an approach that makes them think about the cause and effect of actions. Incorporating physical action with physics enables students to assimilate knowledge and adopt an action-based problem-solving approach to physics concepts. This integrated approach requires synthesis of information from various sources to accomplish the task. As a transferable skill, this will ensure that materials scientists are visionary in their approach to real-life problems.
-
In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis. To this end, we propose DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry. At its core, our approach is based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure. Our approach combines insights from 3D geometric computer vision with recent advances in learning image-to-image mappings based on adversarial loss functions. DeepVoxels is supervised, without requiring a 3D reconstruction of the scene, using a 2D re-rendering loss and enforces perspective and multi-view geometry in a principled manner. We apply our persistent 3D scene representation to the problem of novel view synthesis demonstrating high-quality results for a variety of challenging scenes.
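The core data structure above, a Cartesian grid of persistent features sampled at continuous 3D points, can be sketched as a trilinear lookup (a minimal scalar-feature toy; DeepVoxels itself stores learned feature vectors and is trained end to end against a 2D re-rendering loss):

```python
import math

def trilinear_lookup(grid, x, y, z):
    """Sample a Cartesian 3-D grid at a continuous point by trilinear
    interpolation, the kind of voxel-feature lookup a persistent 3D
    embedding relies on. Toy version: `grid` is a nested list
    grid[i][j][k] of scalar features.
    """
    x0, y0, z0 = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    val = 0.0
    for i in (0, 1):                      # blend the 8 surrounding corners
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                val += w * grid[x0 + i][y0 + j][z0 + k]
    return val

# 2x2x2 grid whose corner feature is i + j + k
grid = [[[float(i + j + k) for k in range(2)] for j in range(2)]
        for i in range(2)]
center = trilinear_lookup(grid, 0.5, 0.5, 0.5)
```

Because the lookup is differentiable in the grid values, gradients from an image-space loss can flow back into the stored features, which is what lets such a grid be learned without explicit geometry.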