

Search for: All records

Creators/Authors contains: "Siegelmann, Hava T."


  1. Abstract

     Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated into deep artificial neural networks that learn over time, to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks. (A minimal replay-buffer sketch illustrating this mechanism appears after this list.)

  2. Abstract

    Brains demonstrate varying spatial scales of nested hierarchical clustering. Identifying the size of the neuronal cluster that should be represented as a node in a network computation is critical to both neuroscience and artificial intelligence, as these clusters define the cognitive blocks from which intelligent computation is built. Experiments support various forms and sizes of neural clustering, from handfuls of dendrites to thousands of neurons, and hint at their behavior. Here, we use computational simulations with a brain-derived fMRI network to show that brain networks not only remain structurally self-similar across scales but also preserve neuron-like signal integration functionality (“integrate and fire”) at particular clustering scales. We therefore propose a coarse-graining of neuronal networks into ensemble-nodes, with multiple spikes making up an ensemble-spike and a time re-scaling factor defining the ensemble-time step. This fractal-like spatiotemporal property, observed in both structure and function, permits a strategic choice of scale when bridging across experimental scales for computational modeling, and it suggests regulatory constraints on developmental and evolutionary “growth spurts” in brain size, in line with punctuated equilibrium theories in evolutionary biology. (A toy coarse-graining sketch appears after this list.)

  3. The prefrontal cortex encodes and stores numerous, often disparate, schemas and flexibly switches between them. Recent research on artificial neural networks trained by reinforcement learning has made it possible to model fundamental processes underlying schema encoding and storage. Yet how the brain creates new schemas while preserving and using old ones remains unclear. Here we propose a simple neural network framework that incorporates hierarchical gating to model the prefrontal cortex’s ability to flexibly encode and use multiple disparate schemas. We show how gating naturally leads to transfer learning and robust memory savings. We then show how lesions of our network mimic the neuropsychological impairments observed in patients with prefrontal damage. Our architecture, which we call DynaMoE, provides a fundamental framework for how the prefrontal cortex may handle the abundance of schemas necessary to navigate the real world. (A simplified gating sketch appears after this list.)
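The sketch below illustrates the replay idea from the first abstract in its most common deep-learning form: a fixed-capacity buffer stores past examples and interleaves them with new data at every training step, so earlier knowledge keeps being rehearsed. The names (ReplayBuffer, make_batch) and the reservoir-sampling retention policy are illustrative assumptions, not the specific algorithms the paper surveys.

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory of past training examples.

    Reservoir sampling gives every example ever seen an equal chance
    of being retained, regardless of when it arrived.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Keep the new example with probability capacity / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        return random.sample(self.memory, min(k, len(self.memory)))


def make_batch(new_examples, buffer, replay_fraction=0.5):
    """Interleave fresh task data with replayed memories for one step."""
    replayed = buffer.sample(int(len(new_examples) * replay_fraction))
    batch = list(new_examples) + replayed
    random.shuffle(batch)
    for ex in new_examples:  # remember today's data for future rehearsal
        buffer.add(ex)
    return batch
```

Training a network only on the output of make_batch, rather than on new data alone, is the basic rehearsal mechanism that replay-based continual-learning methods build on.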
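Next, a toy numpy sketch of the coarse-graining proposed in the second abstract: neurons are pooled into ensemble-nodes, time is rebinned by a rescaling factor, and an ensemble-spike is emitted when pooled activity crosses a threshold, an integrate-and-fire rule applied at the coarser scale. The random raster, the block clustering by index (rather than the paper's fMRI-derived communities), and the threshold value theta are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike raster: n_neurons x n_steps, 1 = spike (stand-in for real data).
n_neurons, n_steps = 100, 1000
spikes = (rng.random((n_neurons, n_steps)) < 0.02).astype(int)

def coarse_grain(spikes, cluster_size, time_factor, theta=0.05):
    """Pool neurons into ensemble-nodes and rebin time.

    An ensemble-node fires an ensemble-spike in a rescaled time step
    when the fraction of member spikes in that window exceeds theta.
    """
    n_neurons, n_steps = spikes.shape
    n_clusters = n_neurons // cluster_size
    n_bins = n_steps // time_factor
    x = spikes[:n_clusters * cluster_size, :n_bins * time_factor]
    # Sum spikes over each cluster's members and over each time window.
    pooled = (x.reshape(n_clusters, cluster_size, n_bins, time_factor)
                .sum(axis=(1, 3)))
    return (pooled >= theta * cluster_size * time_factor).astype(int)

ensemble = coarse_grain(spikes, cluster_size=10, time_factor=10)
print(ensemble.shape)  # (10, 100): 10 ensemble-nodes, 100 ensemble-time steps
```

Feeding the output of coarse_grain back into itself is one way to probe the self-similarity claim: if the rule is preserved across scales, ensemble-spikes of ensembles should behave like spikes of neurons.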
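Finally, a highly simplified sketch of the hierarchical-gating idea behind DynaMoE from the third abstract: a gating network softly routes each input among expert sub-networks, and a new schema is handled by appending a fresh expert while old experts stay frozen. The published DynaMoE is trained by reinforcement learning; the linear experts, the GatedExperts class, and the lesion comment below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class GatedExperts:
    """Toy mixture-of-experts with a growable gate.

    Experts are kept frozen once their schema is learned; a new schema
    only adds an expert and retrains the gate, which is where transfer
    learning and memory savings come from.
    """

    def __init__(self, dim_in, dim_out):
        self.dim_in, self.dim_out = dim_in, dim_out
        self.experts = []                  # frozen expert weight matrices
        self.gate = np.empty((0, dim_in))  # one gating-score row per expert

    def add_expert(self):
        """Grow capacity for a new schema without touching old experts."""
        self.experts.append(rng.normal(0.0, 0.1, (self.dim_out, self.dim_in)))
        self.gate = np.vstack([self.gate, rng.normal(0.0, 0.1, (1, self.dim_in))])

    def forward(self, x):
        """Route input x through the gate-weighted mixture of experts."""
        w = softmax(self.gate @ x)                      # (n_experts,)
        outs = np.stack([E @ x for E in self.experts])  # (n_experts, dim_out)
        return w @ outs                                 # (dim_out,)

net = GatedExperts(dim_in=8, dim_out=4)
net.add_expert()  # expert for the first schema
net.add_expert()  # a new schema arrives: grow the mixture, don't overwrite
print(net.forward(rng.normal(size=8)).shape)  # (4,)
# Zeroing an expert's weights, e.g. net.experts[0][:] = 0.0, silences that
# sub-network's output, a crude analogue of the lesions mentioned above.
```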