

Search for: All records

Award ID contains: 2139936

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Animals flexibly select actions that maximize future rewards despite facing uncertainty in sensory inputs, action-outcome associations or contexts. The computational and circuit mechanisms underlying this ability are poorly understood. A clue to such computations can be found in the neural systems involved in representing sensory features, sensorimotor-outcome associations and contexts. Specifically, the basal ganglia (BG) have been implicated in forming sensorimotor-outcome associations [1], while the thalamocortical loop between the prefrontal cortex (PFC) and mediodorsal thalamus (MD) has been shown to engage in contextual representations [2, 3]. Interestingly, both human and non-human animal experiments indicate that the MD represents different forms of uncertainty [3, 4]. However, finding evidence for uncertainty representation gives little insight into how it is utilized to drive behavior. Normative theories have excelled at providing such computational insights. For example, deploying traditional machine learning algorithms to fit human decision-making behavior has clarified how associative uncertainty alters exploratory behavior [5, 6]. However, despite their computational insight and ability to fit behaviors, normative models cannot be directly related to neural mechanisms. Therefore, a critical gap exists between what we know about the neural representation of uncertainty on one end and the computational functions uncertainty serves in cognition on the other. This gap can be filled with mechanistic neural models that approximate normative models as well as generate experimentally observed neural representations. In this work, we build a mechanistic cortico-thalamo-BG loop network model that directly fills this gap. The model includes computationally relevant mechanistic details of both BG and thalamocortical circuits, such as distributional activities of dopamine [7] and thalamocortical projections modulating cortical effective connectivity [3] and plasticity [8] via interneurons. We show that our network explores various environments more efficiently and flexibly than commonly used machine learning algorithms, and that the mechanistic features we include are crucial for handling different types of uncertainty in decision-making. Furthermore, through derivation and mathematical proofs, we show that our model approximates two novel normative theories. We show mathematically that the first has near-optimal performance on bandit tasks. The second is a generalization of the well-known CUMSUM algorithm, which is known to be optimal on single change point detection tasks [9]; our normative model expands on this by detecting multiple sequential contextual changes. To our knowledge, our work is the first to link computational insights, normative models and neural realization together in decision-making under various forms of uncertainty.
    Free, publicly-accessible full text available February 18, 2025
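As a point of reference for the change-point detection mentioned in record 1 above, the following is a minimal sketch of the classic CUSUM (cumulative-sum) detector for a single change point. The Gaussian observation model and the values of mu0, mu1, sigma, and threshold are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def cusum_detect(x, mu0, mu1, sigma=1.0, threshold=5.0):
    """Classic one-sided CUSUM detector for a single change point.

    Accumulates the log-likelihood ratio of a post-change mean (mu1)
    against a pre-change mean (mu0); an alarm is raised when the running
    statistic crosses `threshold`. Returns the alarm index or None.
    """
    s = 0.0
    for t, xt in enumerate(x):
        # log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2) for sample xt
        llr = (mu1 - mu0) * (xt - (mu0 + mu1) / 2.0) / sigma**2
        s = max(0.0, s + llr)   # Page's recursion: reset at zero
        if s > threshold:
            return t            # first time the statistic exceeds the threshold
    return None

# Toy usage: the mean shifts from 0 to 1 at t = 100
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(cusum_detect(data, mu0=0.0, mu1=1.0))
```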
  2. Free, publicly-accessible full text available November 16, 2024
  3. Humans and other animals can maintain constant payoffs in an uncertain environment by steadily re-evaluating and flexibly adjusting their current strategy, a process that largely depends on the interactions between the prefrontal cortex (PFC) and mediodorsal thalamus (MD). While the ventromedial PFC (vmPFC) represents the level of uncertainty (i.e., prior belief about external states), it remains unclear how the brain recruits the PFC-MD network to re-evaluate decision strategy based on that uncertainty. Here, we leverage non-linear dynamic causal modeling on fMRI data to test how prior belief-dependent activity in vmPFC gates information flow in the PFC-MD network when individuals switch their decision strategy. We show that prior belief-related responses in vmPFC had a modulatory influence on the connections from dorsolateral PFC (dlPFC) to both the lateral orbitofrontal cortex (lOFC) and the MD. Bayesian parameter averaging revealed that only the connection from the dlPFC to the lOFC surpassed the significance threshold, indicating that the weaker the prior belief, the weaker the inhibitory influence of the vmPFC on the strength of effective connections from dlPFC to lOFC. These findings suggest that the vmPFC acts as a gatekeeper for the recruitment of processing resources to re-evaluate the decision strategy in situations of high uncertainty.
    Free, publicly-accessible full text available November 1, 2024
  4. This study proposes a novel dynamical mechanism for pattern recognition discovered by interpreting a recurrent neural network (RNN) trained on a simple task inspired by the SET card game. We interpreted the trained RNN as recognizing patterns via phase shifts in a low-dimensional limit cycle, in a manner analogous to transitions in a finite state automaton (FSA). We further validated this interpretation by handcrafting a simple oscillatory model that reproduces the dynamics of the trained RNN. Our findings not only suggest a potential dynamical mechanism for pattern recognition but also point to a potential neural implementation of an FSA. Above all, this work contributes to the growing discourse on deep learning model interpretability.
    Free, publicly-accessible full text available August 19, 2024
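To make the finite state automaton analogy in record 4 concrete, here is a toy FSA written as an explicit transition table. The states, symbols, and recognized pattern below are hypothetical and unrelated to the SET-inspired task used in the paper; the abstract's claim is that the trained RNN realizes analogous state transitions as discrete phase shifts of a low-dimensional limit cycle.

```python
# Hypothetical 3-state deterministic automaton over the symbols 'a' and 'b'.
TRANSITIONS = {
    ("start", "a"): "seen_a",
    ("start", "b"): "start",
    ("seen_a", "a"): "seen_a",
    ("seen_a", "b"): "accept",
    ("accept", "a"): "accept",
    ("accept", "b"): "accept",
}

def run_fsa(symbols, state="start"):
    """Return True if the symbol sequence contains the pattern 'ab'."""
    for s in symbols:
        state = TRANSITIONS[(state, s)]   # one discrete state transition per symbol
    return state == "accept"

print(run_fsa("bbaab"))   # True: 'ab' occurs
print(run_fsa("bbbaa"))   # False
```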
  5. Decision making in natural settings requires efficient exploration to handle uncertainty. Since associations between actions and outcomes are uncertain, animals need to balance exploration and exploitation to select the actions that lead to maximal rewards. The computational principles by which animal brains explore during decision-making are poorly understood. Our challenge here was to build a biologically plausible neural network that efficiently explores an environment and to understand its effectiveness mathematically. One of the most evolutionarily conserved and important systems in decision making is the basal ganglia (BG) [1]. In particular, dopamine (DA) activity in the BG is thought to represent the reward prediction error (RPE) that facilitates reinforcement learning [2]. Therefore, our starting point is a cortico-BG loop motif [3]. This network adjusts exploration based on neuronal noise and updates its value estimate through RPE. To account for the fact that animals adjust exploration based on experience, we modified the network in two ways. First, it was recently discovered that DA does not simply represent a scalar RPE value; rather, it represents a distribution of RPEs [4]. We incorporated the distributional RPE framework and extended it by allowing the RPE distribution to update the posterior over action values encoded by cortico-BG connections. Second, it is known that firing in layer 2/3 of the cortex is variable and sparse [5]. Our network thus included a random sparsification of cortical activity as a mechanism for sampling from this posterior for experience-based exploration. Combining these two features, our network is able to take the uncertainty of its value estimates into account to accomplish efficient exploration in a variety of environments.
    Free, publicly-accessible full text available June 23, 2024
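Record 5 describes maintaining a posterior over action values and sampling from it to drive exploration. Functionally this resembles Thompson (posterior) sampling on a bandit; the sketch below illustrates that generic idea for a Gaussian bandit with known observation variance. It is not the paper's cortico-BG network, and the prior and variance settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def thompson_gaussian_bandit(true_means, n_steps=1000, obs_var=1.0):
    """Gaussian Thompson sampling: keep a Gaussian posterior over each arm's
    value, draw one sample per arm, and pull the arm with the largest sample.
    Posterior variance (uncertainty) automatically drives exploration."""
    k = len(true_means)
    post_mean = np.zeros(k)
    post_var = np.ones(k) * 10.0        # broad prior over action values
    rewards = []
    for _ in range(n_steps):
        sampled = rng.normal(post_mean, np.sqrt(post_var))   # sample action values
        a = int(np.argmax(sampled))                          # act greedily on the sample
        r = rng.normal(true_means[a], np.sqrt(obs_var))      # observe reward
        # conjugate Gaussian posterior update for the chosen arm
        precision = 1.0 / post_var[a] + 1.0 / obs_var
        post_mean[a] = (post_mean[a] / post_var[a] + r / obs_var) / precision
        post_var[a] = 1.0 / precision
        rewards.append(r)
    return np.mean(rewards)

print(thompson_gaussian_bandit([0.0, 0.5, 1.0]))   # average reward approaches the best arm
```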
  6. Due to the increasing complexity of robot swarm algorithms, analyzing their performance theoretically is often very difficult. Instead, simulators are often used to benchmark the performance of robot swarm algorithms. However, we are not aware of simulators that take advantage of the naturally highly parallel nature of distributed robot swarms. This paper presents ParSwarm, a parallel C++ framework for simulating robot swarms at scale on multicore machines. We demonstrate the power of ParSwarm by implementing two applications, task allocation and density estimation, and running simulations on large numbers of agents.
    Free, publicly-accessible full text available June 19, 2024
  7. We continue our study from [5] of how concepts that have hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned. In [5], we considered simple tree-structured concepts and feed-forward layered networks. Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges. For these more general cases, we describe and analyze algorithms for recognition and algorithms for learning.
    Free, publicly-accessible full text available June 6, 2024
  8. Task allocation is an important problem for robot swarms to solve, allowing agents to reduce task completion time by performing tasks in a distributed fashion. Existing task allocation algorithms often assume prior knowledge of task location and demand, or fail to consider the effects of the geometric distribution of tasks on the completion time and communication cost of the algorithms. In this paper, we examine an environment where agents must explore and discover tasks with positive demand and successfully assign themselves to complete all such tasks. We first provide a new discrete general model for modeling swarms. Operating within this theoretical framework, we propose two new task allocation algorithms for initially unknown environments – one based on N-site selection and the other on virtual pheromones. We analyze each algorithm separately and also evaluate the effectiveness of the two algorithms in dense vs. sparse task distributions. Compared to the Levy walk, which has been theorized to be optimal for foraging, our virtual pheromone-inspired algorithm is much faster in sparse to medium task densities but is communication- and agent-intensive. Our site selection-inspired algorithm also outperforms the Levy walk in sparse task densities and is a less resource-intensive option than our virtual pheromone algorithm for this case. Because the performance of both algorithms relative to a random walk depends on task density, our results shed light on how task density is important in choosing a task allocation algorithm in initially unknown environments.
    Free, publicly-accessible full text available May 29, 2024
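Record 8 benchmarks its task allocation algorithms against the Levy walk, a heavy-tailed random search commonly used as a foraging baseline. The sketch below generates a 2-D Levy-walk trajectory by inverse-CDF sampling of a bounded power-law step length; the exponent and step bounds are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_walk(n_steps=500, mu=2.0, min_step=1.0, max_step=100.0):
    """2-D Levy walk: step lengths follow a truncated power law p(l) ~ l^(-mu),
    headings are uniform in [0, 2*pi). Heavy-tailed steps mix local search
    with occasional long relocations."""
    pos = np.zeros(2)
    path = [pos.copy()]
    for _ in range(n_steps):
        # inverse-CDF sample of a bounded power-law step length
        u = rng.uniform()
        l = (min_step**(1 - mu) + u * (max_step**(1 - mu) - min_step**(1 - mu)))**(1 / (1 - mu))
        theta = rng.uniform(0, 2 * np.pi)
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
    return np.array(path)

print(levy_walk()[-1])   # final position after a heavy-tailed search
```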
  9. Animal brains evolved to optimize behavior in dynamic environments, flexibly selecting actions that maximize future rewards in different contexts. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental inputs onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relationship between sensory inputs, actions taken, and environmental context, on the one hand, and rewards, on the other, is ambiguous. This credit assignment problem can be categorized into context-independent structural credit assignment and context-dependent continual learning. In this perspective, we survey prior approaches to these two problems and advance the notion that the brain’s specialized neural architectures provide efficient solutions. Within this framework, the thalamus, with its cortical and basal ganglia interactions, serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning, where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable behavioral flexibility, while the slower one enables generalization to new contexts.
  10. Neuromorphic computing would benefit from improved customized hardware. However, translating neuromorphic algorithms to hardware is not easily accomplished. In particular, building superconducting neuromorphic systems requires expertise in both superconducting physics and theoretical neuroscience, which makes such design particularly challenging. In this work, we aim to bridge this gap by presenting a tool and methodology to translate algorithmic parameters into circuit specifications. We first show the correspondence between theoretical neuroscience models and the dynamics of our circuit topologies. We then apply this tool to solve a linear system and implement Boolean logic gates by creating spiking neural networks with our superconducting nanowire-based hardware.
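Record 10 implements Boolean logic gates with spiking neural networks on superconducting nanowire hardware. As a software-level illustration only, the sketch below shows how a leaky integrate-and-fire neuron can act as a coincidence (AND) gate; the weight, threshold, and leak are made-up constants, not circuit specifications from the paper.

```python
def lif_and_gate(spikes_a, spikes_b, weight=0.6, threshold=1.0, leak=0.3):
    """Leaky integrate-and-fire neuron used as a coincidence (AND) gate:
    a single input spike contributes 0.6, which stays below threshold even
    when repeated (the leaky sum converges to 0.6 / (1 - 0.3) < 1.0), while
    two simultaneous spikes contribute 1.2 and cross the threshold."""
    v = 0.0
    out = []
    for a, b in zip(spikes_a, spikes_b):
        v = leak * v + weight * (a + b)   # integrate weighted input spikes with leak
        if v >= threshold:
            out.append(1)
            v = 0.0                        # reset membrane potential after firing
        else:
            out.append(0)
    return out

#            t:  0  1  2  3
a_spikes    = [1, 0, 1, 0]
b_spikes    = [1, 0, 0, 0]
print(lif_and_gate(a_spikes, b_spikes))    # [1, 0, 0, 0]: fires only when both inputs spike
```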