
Search for: All records

Creators/Authors contains: "Abbott, LF"


  1. Memories are an important part of how we think, understand the world around us, and plan future actions. In the brain, memories are thought to be stored in a region called the hippocampus. When memories are formed, neurons store events that occur around the same time together. This might explain why, in the brains of animals, the activity associated with retrieving memories is often not just a snapshot of what happened at a specific moment: it can also include information about what the animal might experience next. This has clear utility if animals use memories to predict what they might experience next and plan future actions. Mathematically, this notion of predictiveness can be summarized by an algorithm known as the successor representation, which describes what the activity of neurons in the hippocampus looks like when retrieving memories and making predictions based on them. However, even though the successor representation can computationally reproduce the activity seen in the hippocampus when it is making predictions, it is unclear what biological mechanisms underpin this computation in the brain. Fang et al. approached this problem by trying to build a model that could generate the same activity patterns computed by the successor representation using only biological mechanisms known to exist in the hippocampus. First, they used computational methods to design a network of neurons that had the biological properties of neural networks in the hippocampus. They then used the network to simulate neural activity. The results show that the activity of the network they designed was able to exactly match the successor representation. Additionally, the data resulting from the simulated activity in the network fitted experimental observations of hippocampal activity in Tufted Titmice. One advantage of the network designed by Fang et al. is that it can generate predictions in flexible ways. That is, it can make both short- and long-term predictions from what an individual is experiencing at the moment. This flexibility means that the network can be used to simulate how the hippocampus learns in a variety of cognitive tasks. Additionally, the network is robust to different conditions. Given that the brain has to store memories in many different situations, this is a promising indication that this network may be a reasonable model of how the brain learns. The results of Fang et al. lay the groundwork for connecting biological mechanisms in the hippocampus at the cellular level to cognitive effects, an essential step toward understanding the hippocampus and its role in health and disease. For instance, their network may provide a concrete approach to studying how disruptions to the ways neurons make and break connections can impair memory formation. More generally, better models of the biological mechanisms involved in hippocampal computation can help scientists understand and test theories about how memories are formed and stored in the brain.
    Free, publicly-accessible full text available March 16, 2024
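The successor representation named in this summary has a compact closed form. Below is a minimal sketch of the algorithm itself, not the authors' biological network model; the ring-world transition matrix and discount factor are illustrative assumptions:

```python
import numpy as np

def successor_representation(T, gamma):
    """SR matrix M = sum_k gamma^k T^k = (I - gamma*T)^{-1}.

    T is the one-step state-transition matrix; M[i, j] is the expected
    discounted number of future visits to state j starting from state i.
    """
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Random walk on a 5-state ring: step left or right with equal probability.
n = 5
T = np.zeros((n, n))
for i in range(n):
    T[i, (i - 1) % n] = 0.5
    T[i, (i + 1) % n] = 0.5

M = successor_representation(T, gamma=0.9)
# Each row of M is a predictive "place field": activity for the current
# state spreads toward states likely to be visited next.
```

Because T is a stochastic matrix, every row of M sums to 1/(1 - gamma), which is one quick sanity check on an implementation.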
  2. Soltani, Alireza (Ed.)
    Cortical circuits generate excitatory currents that must be cancelled by strong inhibition to assure stability. The resulting excitatory-inhibitory (E-I) balance can generate spontaneous irregular activity but, in standard balanced E-I models, this requires that an extremely strong feedforward bias current be included along with the recurrent excitation and inhibition. The absence of experimental evidence for such large bias currents inspired us to examine an alternative regime that exhibits asynchronous activity without requiring unrealistically large feedforward input. In these networks, irregular spontaneous activity is supported by a continually changing sparse set of neurons. To support this activity, synaptic strengths must be drawn from high-variance distributions. Unlike standard balanced networks, these sparse-balance networks exhibit robust nonlinear responses to uniform inputs and non-Gaussian input statistics. Interestingly, the speed, not the size, of synaptic fluctuations dictates the degree of sparsity in the model. In addition to simulations, we provide a mean-field analysis to illustrate the properties of these networks.
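The role of the feedforward bias in standard balanced networks can be seen in the textbook mean-field argument: for large connectivity, the O(sqrt(K)) recurrent and feedforward terms must cancel, which fixes the population rates linearly in the feedforward drive. A hedged sketch with generic two-population balance equations and illustrative couplings (not this paper's sparse-balance analysis):

```python
import numpy as np

# Balance conditions (leading-order cancellation of O(sqrt(K)) input):
#   J_EE*r_E - J_EI*r_I + E_X*X = 0
#   J_IE*r_E - J_II*r_I + I_X*X = 0
# Coupling values below are illustrative, chosen so both rates come out positive.
J = np.array([[1.0, -2.0],    #  J_EE, -J_EI
              [1.0, -1.8]])   #  J_IE, -J_II
feedforward = np.array([1.0, 0.8])  # coupling of the bias X to E and I

def balanced_rates(X):
    # Solve J @ r + feedforward * X = 0 for the population rates r = (r_E, r_I)
    return np.linalg.solve(J, -feedforward * X)

print(balanced_rates(1.0))  # nonzero balanced rates (about [1, 1] here)
print(balanced_rates(0.0))  # zero rates: without a feedforward bias, the
                            # standard balanced solution has no activity
```

This is exactly the point the abstract makes: in the standard regime, spontaneous activity hinges on a large feedforward bias X, which motivates the alternative sparse regime studied here.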
  3. We developed a neural network model that can account for major elements common to human focal seizures. These include the tonic-clonic transition, slow advance of clinical semiology and corresponding seizure territory expansion, widespread EEG synchronization, and slowing of the ictal rhythm as the seizure approaches termination. These were reproduced by incorporating usage-dependent exhaustion of inhibition in an adaptive neural network that receives global feedback inhibition in addition to local recurrent projections. Our model proposes mechanisms that may underlie common EEG seizure onset patterns and status epilepticus, and postulates a role for synaptic plasticity in the emergence of epileptic foci. Complex patterns of seizure activity and bistable seizure endpoints arise when stochastic noise is included. With the rapid advancement of clinical and experimental tools, we believe that this model can provide a roadmap and potentially an in silico testbed for future explorations of seizure mechanisms and clinical therapies.
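Usage-dependent exhaustion of inhibition, the key ingredient above, can be illustrated with a simple synaptic-resource variable: each inhibitory event consumes a fraction of the available resource, which slowly recovers. This is a generic depression model with made-up parameters, not the paper's network:

```python
import numpy as np

def run_resource(rate_hz, T=10.0, dt=1e-3, U=0.2, tau_rec=5.0):
    """Mean of the inhibitory resource x under Poisson-like usage at rate_hz.

    x recovers toward 1 with time constant tau_rec; each inhibitory event
    consumes a fraction U of what remains. Steady state: 1/(1 + U*rate*tau).
    """
    rng = np.random.default_rng(0)
    x, xs = 1.0, []
    p_spike = rate_hz * dt
    for _ in range(int(T / dt)):
        x += dt * (1.0 - x) / tau_rec   # slow recovery toward full strength
        if rng.random() < p_spike:
            x -= U * x                  # each use depletes the resource
        xs.append(x)
    return np.mean(xs[-2000:])          # average over the final 2 s

low = run_resource(1.0)    # sparse usage: inhibition remains effective
high = run_resource(50.0)  # sustained ictal-like usage: inhibition exhausted
```

Under sustained high-rate usage, the resource collapses toward 1/(1 + U*rate*tau), capturing how runaway activity can progressively disable its own inhibitory brake.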
  4. Appropriate generalization of learned responses to new situations is vital for adaptive behavior. We provide a circuit-level account of generalization in the electrosensory lobe (ELL) of weakly electric mormyrid fish. Much is already known in this system about a form of learning in which motor corollary discharge signals cancel responses to the uninformative input evoked by the fish’s own electric pulses. However, for this cancellation to be useful under natural circumstances, it must generalize accurately across behavioral regimes, specifically different electric pulse rates. We show that such generalization indeed occurs in ELL neurons, and develop a circuit-level model explaining how this may be achieved. The mechanism involves regularized synaptic plasticity and an approximate matching of the temporal dynamics of motor corollary discharge and electrosensory inputs. Recordings of motor corollary discharge signals in mossy fibers and granule cells provide direct evidence for such matching.
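Negative-image cancellation with regularized plasticity can be sketched in a toy reduction: corollary-discharge inputs, represented by temporal basis functions, are combined through plastic weights to cancel the predictable self-generated input. The basis functions, signal shape, and parameters here are invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.0, 1.0, 12)
# Corollary-discharge basis: Gaussian bumps tiling the post-pulse interval.
basis = np.exp(-(t[None, :] - centers[:, None]) ** 2 / 0.01)   # shape (12, 100)
# Predictable sensory input evoked by the fish's own pulse (illustrative).
reafference = np.sin(2 * np.pi * t) * np.exp(-3.0 * t)

w = np.zeros(12)
eta, lam = 0.2, 1e-3   # learning rate; L2 regularization aids generalization
for _ in range(2000):
    residual = reafference + w @ basis        # what the cell still "sees"
    # Anti-Hebbian-style update: weights shrink the residual response,
    # with weight decay (the regularization term lam * w).
    w -= eta * (basis @ residual / t.size + lam * w)

# After learning, the negative image w @ basis cancels most of the reafference.
```

The regularizer keeps the learned weights small and smooth, which is one way a cancellation signal can remain accurate when the temporal statistics of the input shift, echoing the generalization question the abstract addresses.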
  5. Different coding strategies are used to represent odor information at various stages of the mammalian olfactory system. A temporal latency code represents odor identity in olfactory bulb (OB), but this temporal information is discarded in piriform cortex (PCx) where odor identity is instead encoded through ensemble membership. We developed a spiking PCx network model to understand how this transformation is implemented. In the model, the impact of OB inputs activated earliest after inhalation is amplified within PCx by diffuse recurrent collateral excitation, which then recruits strong, sustained feedback inhibition that suppresses the impact of later-responding glomeruli. We model increasing odor concentrations by decreasing glomerulus onset latencies while preserving their activation sequences. This produces a multiplexed cortical odor code in which activated ensembles are robust to concentration changes while concentration information is encoded through population synchrony. Our model demonstrates how PCx circuitry can implement multiplexed ensemble-identity/temporal-concentration odor coding.
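The order-based ensemble code described above can be caricatured in a few lines. This is a deliberate simplification of the spiking model: feedback inhibition is assumed to be recruited once a fixed number of glomeruli have responded, so ensemble membership depends on response order rather than absolute timing. The latencies and trigger count are invented:

```python
import numpy as np

def cortical_ensemble(latencies, n_trigger=3):
    """Return (ensemble member indices, temporal spread of their onsets).

    Activity-dependent feedback inhibition kicks in after n_trigger inputs
    arrive, so only the earliest-responding glomeruli join the ensemble.
    """
    latencies = np.asarray(latencies, dtype=float)
    order = np.argsort(latencies)
    members = np.sort(order[:n_trigger])            # earliest responders win
    spread = np.ptp(latencies[order[:n_trigger]])   # synchrony of their onsets
    return members, spread

base = np.array([10.0, 25.0, 40.0, 60.0, 90.0])  # onset latencies (ms)
high = 0.5 * base   # higher concentration: shorter latencies, same sequence

ens_lo, spread_lo = cortical_ensemble(base)
ens_hi, spread_hi = cortical_ensemble(high)
# Ensemble identity is concentration-invariant, while the tighter onset
# spread at high concentration is the population-synchrony concentration code.
```

Because the inhibition trigger tracks activity rather than elapsed time, compressing all latencies leaves the winning ensemble unchanged but increases its synchrony, which is the multiplexed identity/concentration code in miniature.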