-
Abstract: This double interview with two distinguished researchers in computational neuroscience, Kanaka Rajan and Alessandro Treves, captures part of the talks and discussions that emerged during a workshop on the physical modelling of thought held in Berlin in January 2023. Its topic is the intersection of physics and neuroscience, viewed through the perspectives of the interviewees. The dialogue traverses the complex terrain of modelling thought processes, shedding light on the trade-off between simplicity and complexity that defines computational neuroscience. From the early days of physics-inspired brain models to recent advances in large language models, the interviewees share their journeys, challenges, and insights into modelling physical and biological systems. They recount their experience with computational neuroscience, explore the impact of large language models on our understanding of human language and cognition, and speculate on future directions for physics-inspired computational neuroscience, emphasising the importance of interdisciplinary collaboration and a deeper integration of complexity and detail in modelling the brain and its functions.
Free, publicly-accessible full text available September 1, 2026.
-
Biological agents do not have infinite resources with which to learn new things. For this reason, a central aspect of human learning is the ability to recycle previously acquired knowledge so that new skills can be acquired faster and with fewer resources. Despite this, how neural networks in the brain leverage existing knowledge to learn new computations is not well understood. In this work, we study this question in artificial recurrent neural networks (RNNs) trained on a corpus of commonly used neuroscience tasks. Combining brain-inspired inductive biases that we call functional and structural, we propose a system that learns new tasks by building on top of pre-trained latent dynamics organised into separate recurrent modules. These modules, acting as prior knowledge acquired previously through evolution or development, are pre-trained on the statistics of the full corpus of tasks so as to be independent and maximally informative. The resulting model, which we call a Modular Latent Primitives (MoLaP) network, allows multiple tasks to be learned while keeping parameter counts, and parameter updates, low. We also show that the skills acquired with our approach are more robust to a broad range of perturbations than those acquired with other multi-task learning strategies, and that generalisation to new tasks is facilitated. This work offers a new perspective on achieving efficient multi-task learning in the brain, illustrating the benefits of leveraging pre-trained latent dynamical primitives.
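To make the modular-primitives idea above concrete, here is a minimal sketch, not the authors' implementation: a set of pre-trained recurrent modules is frozen, and a new task is learned by training only a lightweight input map and readout on top of their concatenated latent dynamics. The class and variable names, module sizes, and the choice of GRU primitives sharing one input mapping are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch of a modular-latent-primitives style model (illustrative assumptions):
# frozen, pre-trained recurrent modules supply latent dynamics; only a small input
# map and a readout are trained when learning a new task.
import torch
import torch.nn as nn


class ModularLatentRNN(nn.Module):
    def __init__(self, primitives, input_dim, output_dim):
        super().__init__()
        # Recurrent "primitives", assumed pre-trained on a corpus of tasks; frozen here.
        # For this sketch, all primitives are assumed to share the same input size.
        self.primitives = nn.ModuleList(primitives)
        for module in self.primitives:
            for p in module.parameters():
                p.requires_grad = False
        hidden_total = sum(m.hidden_size for m in self.primitives)
        # Task-specific, trainable layers: the only parameters updated for a new task.
        self.input_map = nn.Linear(input_dim, self.primitives[0].input_size)
        self.readout = nn.Linear(hidden_total, output_dim)

    def forward(self, x):
        # x: (batch, time, input_dim)
        u = self.input_map(x)
        # Run each frozen module on the shared input and concatenate their latent states.
        latents = [module(u)[0] for module in self.primitives]
        z = torch.cat(latents, dim=-1)
        return self.readout(z)


# Usage: three hypothetical pre-trained GRU primitives reused for a new two-output task.
primitives = [nn.GRU(input_size=16, hidden_size=32, batch_first=True) for _ in range(3)]
model = ModularLatentRNN(primitives, input_dim=4, output_dim=2)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
outputs = model(torch.randn(8, 50, 4))   # shape: (8, 50, 2)
```

In this sketch only the input map and readout receive gradient updates, which mirrors the abstract's point about learning new tasks while keeping parameter counts, and updates, low.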