

This content will become publicly available on July 1, 2026

Title: Parvalbumin neurons and cortical coding of dynamic stimuli: a network model
Here, we propose a model of the mechanisms that underlie neuronal responses in the auditory cortex. The study focuses on a cortical circuit of excitatory and inhibitory (parvalbumin-expressing) neurons. Using physiologically relevant parameters in the proposed network, we show that the model reproduces results observed in live (in vivo) studies.
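The abstract does not specify the model's equations, but the circuit it describes can be illustrated with a minimal Wilson-Cowan-style rate model of one excitatory and one inhibitory (PV-like) population. All parameter values and the specific form below are assumptions chosen for stability, not the paper's actual model.

```python
# Minimal excitatory-inhibitory rate model, Euler-integrated.
# Illustrative sketch only: weights, time constants, and the rectified-linear
# transfer function are assumptions, not taken from the paper.
def relu(x):
    return max(x, 0.0)

def simulate(stim, dt=0.001, tau_e=0.010, tau_i=0.005,
             w_ee=1.2, w_ei=1.0, w_ie=1.5, w_ii=0.8):
    """Integrate E and I firing rates given a list of stimulus drive values."""
    r_e, r_i = 0.0, 0.0
    trace = []
    for s in stim:
        drive_e = relu(w_ee * r_e - w_ei * r_i + s)   # recurrent excitation minus PV inhibition
        drive_i = relu(w_ie * r_e - w_ii * r_i + s)   # PV cells driven by E cells and stimulus
        r_e += dt / tau_e * (-r_e + drive_e)
        r_i += dt / tau_i * (-r_i + drive_i)
        trace.append((r_e, r_i))
    return trace

trace = simulate([1.0] * 500)   # 0.5 s of constant drive
r_e, r_i = trace[-1]            # rates settle to a stable E-I balance
```

With these (assumed) weights the network settles to a fixed point where the inhibitory rate exceeds the excitatory rate, a qualitative signature of strong PV-mediated inhibition.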
Award ID(s):
2319321
PAR ID:
10617399
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
American Physiological Society
Date Published:
Journal Name:
Journal of Neurophysiology
Volume:
134
Issue:
1
ISSN:
0022-3077
Page Range / eLocation ID:
53 to 66
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Transfer learning is an effective technique for tuning a deep learning model when training data or computational resources are limited. Instead of training a new model from scratch, the parameters of an existing base model are adjusted for the new task. The accuracy of such a fine-tuned model depends on the suitability of the base model chosen. Model search automates the selection of such a base model by evaluating the suitability of candidate models for a specific task. This entails inference with each candidate model on task-specific data. With thousands of models available through model stores, the computational cost of model search is a major bottleneck for efficient transfer learning. In this work, we present Alsatian, a novel model search system. Based on the observation that many candidate models overlap to a significant extent and following a careful bottleneck analysis, we propose optimization techniques that are applicable to many model search frameworks. These optimizations include: (i) splitting models into individual blocks that can be shared across models, (ii) caching of intermediate inference results and model blocks, and (iii) selecting a beneficial search order for models to maximize sharing of cached results. In our evaluation on state-of-the-art deep learning models from computer vision and natural language processing, we show that Alsatian outperforms baselines by up to 14x.
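The three optimizations can be sketched with a toy in which models are sequences of named blocks and models sharing a prefix reuse cached intermediate results. The block names and the string "activations" below are hypothetical stand-ins; the real system caches tensors produced by deep-learning layers.

```python
# Toy sketch of block sharing (i), caching (ii), and search ordering (iii).
# Each candidate model is a tuple of block IDs; fine-tuned models commonly
# share a long prefix with their base model.
MODELS = {
    "m1": ("stem", "b1", "b2", "head_a"),
    "m2": ("stem", "b1", "b2", "head_b"),   # shares 3 blocks with m1
    "m3": ("stem", "b1", "b3", "head_c"),   # shares 2 blocks with m1/m2
}

cache = {}        # prefix (tuple of block IDs) -> cached intermediate result
block_runs = 0    # counts actual block executions

def run_prefix(prefix):
    """Evaluate a block sequence, reusing the longest cached prefix."""
    global block_runs
    if prefix in cache:
        return cache[prefix]
    x = run_prefix(prefix[:-1]) if len(prefix) > 1 else "input"
    block_runs += 1                         # only the last block is executed
    cache[prefix] = f"{x}->{prefix[-1]}"
    return cache[prefix]

# Sorting by block sequence puts shared prefixes next to each other,
# which maximizes cache hits (optimization iii).
for name in sorted(MODELS, key=MODELS.get):
    run_prefix(MODELS[name])

# Naive search would execute 3 * 4 = 12 blocks; sharing needs only 7.
```

The saving grows with the number of sibling models per base: a store with hundreds of fine-tunes of one backbone pays for the shared trunk once.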
  2. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    Classical network design models, such as the Survivable Network Design problem (SNDP), are (partly) motivated by robustness to faults under the assumption that any subset of edges up to a specific number can fail. We consider non-uniform fault models where the subset of edges that fail can be specified in different ways. Our primary interest is in the flexible graph connectivity model [Adjiashvili, 2013; Adjiashvili et al., 2020; Adjiashvili et al., 2022; Boyd et al., 2023], in which the edge set is partitioned into safe and unsafe edges. Given parameters p,q ≥ 1, the goal is to find a cheap subgraph that remains p-connected even after the failure of q unsafe edges. We also discuss the bulk-robust model [Adjiashvili et al., 2015; Adjiashvili, 2015] and the relative survivable network design model [Dinitz et al., 2022]. While SNDP admits a 2-approximation [K. Jain, 2001], the approximability of problems in these more complex models is much less understood even in special cases. We make two contributions. Our first set of results is in the flexible graph connectivity model. Motivated by a conjecture that a constant factor approximation is feasible when p and q are fixed, we consider two special cases. For the s-t case we obtain an approximation ratio that depends only on p,q whenever p+q > pq/2, which includes (p,2) and (2,q) for all p,q ≥ 1. For the global connectivity case we obtain an O(q) approximation for (2,q), and an O(p) approximation for (p,2) and (p,3) for any p ≥ 1, and for (p,4) when p is even. These are based on an augmentation framework and decomposing the families of cuts that need to be covered into a small number of uncrossable families. Our second result is a poly-logarithmic approximation for a generalization of the bulk-robust model when the "width" of the given instance (the maximum number of edges that can fail in any particular scenario) is fixed.
Via this, we derive corresponding approximations for the flexible graph connectivity model and the relative survivable network design model. We utilize a recent framework due to Chen et al. [Chen et al., 2022] that was designed for handling group connectivity. 
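The feasibility requirement of the flexible connectivity model is easy to state concretely. The sketch below is a brute-force verifier for the special case p = 1 (the subgraph must stay connected after the failure of any q unsafe edges); it checks feasibility by enumeration, exponential in the number of unsafe edges, and is not the approximation algorithm from the paper. The example graph is an assumption for illustration.

```python
# Brute-force feasibility check for (p=1, q)-flexible graph connectivity.
from itertools import combinations

def connected(n, edges):
    """Union-find connectivity test on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def flex_connected(n, safe, unsafe, q):
    """True if safe+unsafe stays connected after ANY q unsafe edges fail."""
    edges = safe + unsafe
    for failed in combinations(unsafe, min(q, len(unsafe))):
        failed = set(failed)
        if not connected(n, [e for e in edges if e not in failed]):
            return False
    return True

# A 4-cycle of unsafe edges plus one safe chord: this subgraph survives any
# single unsafe failure (q = 1) but not every pair of failures (q = 2).
safe = [(0, 2)]
unsafe = [(0, 1), (1, 2), (2, 3), (3, 0)]
ok_q1 = flex_connected(4, safe, unsafe, 1)   # True
ok_q2 = flex_connected(4, safe, unsafe, 2)   # False: vertex 3 can be cut off
```

Checking only failure sets of size exactly q suffices, since removing fewer edges leaves a supergraph of some size-q survivor.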
  3. In the context of supervised parametric models, we introduce the concept of e-values. An e-value is a scalar quantity that represents the proximity of the sampling distribution of parameter estimates in a model trained on a subset of features to that of the model trained on all features (i.e. the full model). Under general conditions, a rank ordering of e-values separates models that contain all essential features from those that do not. The e-values are applicable to a wide range of parametric models. We use data depths and a fast resampling-based algorithm to implement a feature selection procedure using e-values, providing consistency results. For a p-dimensional feature space, this procedure requires fitting only the full model and evaluating p + 1 models, as opposed to the traditional requirement of fitting and evaluating 2^p models. Through experiments across several model settings and synthetic and real datasets, we establish that the e-values method is a promising general alternative to existing model-specific methods of feature selection.
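The "full model plus p drop-one models" evaluation pattern can be sketched on a toy regression. The Euclidean proximity used below is a crude stand-in for the paper's data-depth-based e-value, and the data are constructed so feature 1 is irrelevant; everything here is an illustrative assumption, not the paper's procedure.

```python
# Toy sketch of the e-values pattern: fit the FULL model once, then score each
# of the p drop-one models by proximity of its zero-padded estimate to the
# full-model estimate, and rank. Here p = 2, so p + 1 = 3 models replace a
# 2^p search.
def ols2(xs, ys):
    """Two-feature least squares (no intercept) via Cramer's rule."""
    s11 = sum(x[0] * x[0] for x in xs)
    s12 = sum(x[0] * x[1] for x in xs)
    s22 = sum(x[1] * x[1] for x in xs)
    b1 = sum(x[0] * y for x, y in zip(xs, ys))
    b2 = sum(x[1] * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    return [(s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det]

# y depends only on feature 0; feature 1 is irrelevant by construction.
xs = [(1.0, 0.5), (2.0, -1.0), (3.0, 0.2), (4.0, 1.5)]
ys = [2.0 * x[0] for x in xs]

full = ols2(xs, ys)                        # exactly [2.0, 0.0] on this data

def drop_one(j):
    """Refit keeping only the other feature; pad the dropped slot with 0."""
    k = 1 - j
    beta_k = sum(x[k] * y for x, y in zip(xs, ys)) / sum(x[k] ** 2 for x in xs)
    est = [0.0, 0.0]
    est[k] = beta_k
    return est

def proximity(est):                        # higher = closer to the full model
    return -sum((a - b) ** 2 for a, b in zip(est, full))

scores = {j: proximity(drop_one(j)) for j in (0, 1)}
best_drop = max(scores, key=scores.get)    # dropping the irrelevant feature wins
```

Dropping the irrelevant feature leaves the estimate essentially unchanged (high proximity), while dropping the essential one moves it far away, which is the separation the rank ordering exploits.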
  4. We present a new semiparametric extension of the Fay-Herriot model, termed the agnostic Fay-Herriot model (AGFH), in which the sampling-level model is expressed in terms of an unknown general function. The AGFH model can therefore express any distribution in the sampling model, since the choice of this function is extremely broad. We propose a Bayesian modelling scheme for AGFH in which the unknown function is assigned a Gaussian Process prior. Using a Metropolis-within-Gibbs Markov chain Monte Carlo sampling scheme, we study the performance of the AGFH model, along with that of a hierarchical Bayesian extension of the Fay-Herriot model. Our analysis shows that the AGFH is an excellent modelling alternative when the sampling distribution is non-Normal, especially in the case where the sampling distribution is bounded. It is also the best choice when the sampling variance is high. However, the hierarchical Bayesian framework and the traditional empirical Bayesian framework can be good modelling alternatives when the signal-to-noise ratio is high and there are computational constraints. AMS subject classification: 62D05; 62F15
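The Metropolis-within-Gibbs structure can be illustrated on a stripped-down Fay-Herriot-style model: Gibbs steps for the area effects (whose full conditionals are conjugate normal) interleaved with a random-walk Metropolis step for the between-area variance. This shows the sampler pattern only; the AGFH model additionally places a Gaussian Process prior on the unknown sampling-level function, which is omitted here, and the data, prior, and tuning constants are assumptions.

```python
# Metropolis-within-Gibbs sketch for a toy area-level model:
#   y_i | theta_i ~ N(theta_i, D_i),   theta_i ~ N(mu, A),   mu known = 0.
import math, random

random.seed(0)
y = [1.2, -0.4, 0.8, 2.1, 0.3]
D = [0.5, 0.5, 1.0, 1.0, 0.25]      # known sampling variances
mu = 0.0

def log_post_A(A, theta):
    """Log posterior of A given theta, under an assumed weak prior p(A) ~ 1/A."""
    if A <= 0:
        return -math.inf
    ll = sum(-0.5 * math.log(A) - (t - mu) ** 2 / (2 * A) for t in theta)
    return ll - math.log(A)

theta = list(y)
A = 1.0
draws = []
for it in range(2000):
    # Gibbs step: theta_i | rest is Normal with precision 1/D_i + 1/A.
    for i in range(len(y)):
        prec = 1.0 / D[i] + 1.0 / A
        mean = (y[i] / D[i] + mu / A) / prec
        theta[i] = random.gauss(mean, math.sqrt(1.0 / prec))
    # Metropolis step: symmetric random walk on log A (Jacobian term included).
    prop = A * math.exp(random.gauss(0.0, 0.5))
    log_accept = (log_post_A(prop, theta) + math.log(prop)
                  - log_post_A(A, theta) - math.log(A))
    if math.log(random.random()) < log_accept:
        A = prop
    draws.append(A)

post_mean_A = sum(draws[500:]) / len(draws[500:])   # discard burn-in
```

The same skeleton scales to the AGFH setting by adding a Gibbs (or Metropolis) block for the GP-distributed function values alongside the variance update.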
  5. Abstract Model reduction methods usually focus on the error performance analysis; however, in the presence of uncertainties, it is important to analyze the robustness properties of the error in model reduction as well. This problem is particularly relevant for engineered biological systems that need to function in a largely unknown and uncertain environment. We give robustness guarantees for structured model reduction of linear and nonlinear dynamical systems under parametric uncertainties. We consider a model reduction problem where the states in the reduced model are a strict subset of the states of the full model, and the dynamics for all of the other states are collapsed to zero (similar to quasi‐steady‐state approximation). We show two approaches to compute a robustness guarantee metric for any such model reduction—a direct linear analysis method for linear dynamics and a sensitivity-analysis-based approach that also works for nonlinear dynamics. Using the robustness guarantees with an error metric and an input‐output mapping metric, we propose an automated model reduction method to determine the best possible reduced model for a given detailed system model. We apply our method to (1) the design space exploration of a gene expression system, which leads to a new mathematical model that accounts for the limited resources in the system, and (2) the model reduction of a population control circuit in bacterial cells.
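The state-subset reduction described above can be made concrete on a two-state linear system with one fast state: setting the fast state's derivative to zero (the quasi-steady-state step) and substituting yields a one-state reduced model whose trajectory tracks the full model when the timescale separation is large. The system and epsilon below are assumptions for illustration.

```python
# QSSA-style structured reduction on a toy linear system:
#   dx/dt = -2x + z        (slow, kept state)
#   dz/dt = (x - z)/eps    (fast, collapsed state)
# Setting dz/dt = 0 gives z = x, so the reduced model is dx/dt = -x.
def simulate_full(eps, dt=1e-4, T=2.0):
    x, z = 1.0, 0.0
    for _ in range(int(T / dt)):
        dx = -2.0 * x + z
        dz = (x - z) / eps
        x, z = x + dt * dx, z + dt * dz
    return x

def simulate_reduced(dt=1e-4, T=2.0):
    x = 1.0
    for _ in range(int(T / dt)):
        x += dt * (-x)          # reduced dynamics after substituting z = x
    return x

x_full = simulate_full(eps=0.01)
x_red = simulate_reduced()
err = abs(x_full - x_red)       # shrinks as eps -> 0
```

A robustness analysis in the abstract's sense would then ask how this reduction error behaves as the system's parameters (here the rate constants and epsilon) are perturbed, rather than at a single nominal point.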