

Title: An Efficient Pipeline for Biophysical Modeling of Neurons
Developing biophysical conductance-based neuronal models involves selecting numerous interacting parameters, which makes the overall process computationally intensive, complex, and often intractable, and which motivates automation. A recently reported insight, that currents can be grouped into distinct biophysical modules associated with specific neurocomputational properties, simplifies the automated selection of parameters. The present paper adds a new current module to that framework for designing spike frequency adaptation and bursting characteristics based on user specifications. We then show how our proposed grouping of currents into modules facilitates a pipeline that automates the biophysical modeling of single neurons exhibiting multiple neurocomputational properties. The software will be made available for public download via our site cyneuro.org.
Award ID(s):
1730655
NSF-PAR ID:
10311931
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
2021 10th International IEEE/EMBS Conference on Neural Engineering (NER)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
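
The module-by-module strategy described in the abstract above lends itself to a staged parameter search: each module's conductances are tuned against the neurocomputational feature it controls while previously fitted modules stay fixed. The following is a minimal, hypothetical Python sketch of that idea; the module names, parameter ranges, feature functions, and targets are illustrative placeholders, not the authors' pipeline, and a real implementation would replace `simulate_features` with calls to an actual simulator such as NEURON.

```python
# Hypothetical sketch of a module-by-module parameter search; all names, ranges,
# and feature functions are made up for illustration and are not the authors' software.
import random

# Each module groups the maximal conductances that shape one target property.
MODULES = {
    "excitability": {"gbar_na": (0.05, 0.20), "gbar_kdr": (0.01, 0.10)},
    "adaptation_bursting": {"gbar_m": (0.0005, 0.01), "gbar_cat": (0.0001, 0.005)},
}

# User-specified targets for the features each module controls.
TARGETS = {
    "excitability": {"rheobase_nA": 0.12},
    "adaptation_bursting": {"adaptation_ratio": 0.5},
}

def simulate_features(params):
    """Stand-in for running the conductance-based model and measuring features;
    a real pipeline would launch a simulator here."""
    gm = params.get("gbar_m", 0.0)
    gcat = params.get("gbar_cat", 0.0)
    return {
        "rheobase_nA": params["gbar_kdr"] / params["gbar_na"],
        "adaptation_ratio": gm / (gm + gcat + 1e-9),
    }

def error(features, target):
    """Sum of squared deviations from the user-specified feature targets."""
    return sum((features[k] - v) ** 2 for k, v in target.items())

def fit_module(module, fixed, target, n_samples=500, seed=0):
    """Random search over one module's parameters, holding earlier modules fixed."""
    rng = random.Random(seed)
    best, best_err = dict(fixed), float("inf")
    for _ in range(n_samples):
        trial = dict(fixed)
        trial.update({p: rng.uniform(lo, hi) for p, (lo, hi) in MODULES[module].items()})
        err = error(simulate_features(trial), target)
        if err < best_err:
            best, best_err = trial, err
    return best, best_err

params = {}
for module in MODULES:                     # tune modules sequentially
    params, err = fit_module(module, params, TARGETS[module])
    print(f"{module}: target error = {err:.4g}")
print("selected parameters:", params)
```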
More Like this
  1. Abstract

    We propose a model-based clustering method for high-dimensional longitudinal data via regularization in this paper. This study was motivated by the Trial of Activity in Adolescent Girls (TAAG), which aimed to examine multilevel factors related to the change of physical activity by following up a cohort of 783 girls over 10 years from adolescence to early adulthood. Our goal is to identify the intrinsic grouping of subjects with similar patterns of physical activity trajectories and the most relevant predictors within each group. The previous analyses conducted clustering and variable selection in two steps, while our new method can perform the tasks simultaneously. Within each cluster, a linear mixed-effects model (LMM) is fitted with a doubly penalized likelihood to induce sparsity for parameter estimation and effect selection. The large-sample joint properties are established, allowing the dimensions of both fixed and random effects to increase at an exponential rate of the sample size, with a general class of penalty functions. Assuming subjects are drawn from a Gaussian mixture distribution, model effects and cluster labels are estimated via a coordinate descent algorithm nested inside the Expectation-Maximization (EM) algorithm. Bayesian Information Criterion (BIC) is used to determine the optimal number of clusters and the values of tuning parameters. Our numerical studies show that the new method has satisfactory performance and is able to accommodate complex data with multilevel and/or longitudinal effects.

     
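The estimation strategy in the clustering abstract above, coordinate descent nested inside EM with BIC used to choose the number of clusters, can be caricatured in a few lines. The sketch below is a deliberately simplified, hypothetical version: a spherical-Gaussian mixture with lasso-penalized cluster means, not the paper's doubly penalized linear mixed-effects model, and the data are synthetic.

```python
# Simplified, hypothetical illustration of "coordinate descent nested in EM":
# a unit-variance Gaussian mixture with lasso-penalized cluster means.
import numpy as np

def soft_threshold(z, lam):
    """Lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def penalized_em(X, K, lam, n_iter=100, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(n, K, replace=False)].copy()   # initial cluster centers
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities under unit-variance spherical Gaussians
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logw = np.log(pi) - 0.5 * d2
        logw -= logw.max(axis=1, keepdims=True)
        r = np.exp(logw)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: coordinate-wise soft-thresholded weighted means -> sparse effects
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        for k in range(K):
            mu[k] = soft_threshold((r[:, k] @ X) / nk[k], lam / nk[k])
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    ll = np.log((pi * np.exp(-0.5 * d2)).sum(axis=1) + 1e-300).sum() - 0.5 * n * p * np.log(2 * np.pi)
    bic = -2.0 * ll + np.log(n) * (np.count_nonzero(mu) + K - 1)   # BIC for model choice
    return mu, r.argmax(axis=1), bic

# toy data: two latent groups with different mean profiles
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(50, 10)) for m in (0.0, 2.0)])
for K in (1, 2, 3):
    _, _, bic = penalized_em(X, K, lam=1.0)
    print(f"K={K}: BIC={bic:.1f}")
```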
  2. Enzymatic pathways have evolved uniquely preferred protein expression stoichiometry in living cells, but our ability to predict the optimal abundances from basic properties remains underdeveloped. Here, we report a biophysical, first-principles model of growth optimization for core mRNA translation, a multi-enzyme system that involves proteins with a broadly conserved stoichiometry spanning two orders of magnitude. We show that predictions from maximization of ribosome usage in a parsimonious flux model constrained by proteome allocation agree with the conserved ratios of translation factors. The analytical solutions, without free parameters, provide an interpretable framework for the observed hierarchy of expression levels based on simple biophysical properties, such as diffusion constants and protein sizes. Our results provide an intuitive and quantitative understanding for the construction of a central process of life, as well as a path toward rational design of pathway-specific enzyme expression stoichiometry. 
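As a toy illustration of the kind of argument in the abstract above (and not the paper's actual model), the sketch below maximizes a per-ribosome flux limited by a serial cycle of factor-binding steps under a fixed proteome-mass budget; the resulting optimum makes slow, small factors more abundant, echoing the claim that the expression hierarchy follows from simple biophysical properties such as diffusion constants and protein sizes. All rate constants, sizes, and the budget are made up.

```python
# Toy flux-maximization sketch with illustrative (made-up) parameters.
import numpy as np

k = np.array([2.0, 10.0, 40.0])     # effective factor-ribosome association rates (arbitrary units)
m = np.array([80.0, 50.0, 20.0])    # protein sizes (e.g., kDa)
budget = 100.0                      # proteome mass allocated to these factors

def per_ribosome_flux(c):
    """Serial cycle: time per elongation step is the sum of factor-binding waits."""
    return 1.0 / np.sum(1.0 / (k * c), axis=-1)

# Lagrange-multiplier optimum of flux subject to m.c = budget: c_i proportional to
# 1/sqrt(k_i * m_i), i.e., slower and smaller factors should be more abundant.
c_opt = 1.0 / np.sqrt(k * m)
c_opt *= budget / np.dot(m, c_opt)

# numerical check: random feasible allocations do not beat the analytical optimum
rng = np.random.default_rng(0)
samples = rng.random((20000, 3))
samples *= (budget / (samples @ m))[:, None]
print("analytical optimum flux:", per_ribosome_flux(c_opt))
print("best random sample     :", per_ribosome_flux(samples).max())
print("optimal ratios c1:c2:c3:", np.round(c_opt / c_opt[-1], 2))
```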
  3. Abstract

    As part of its International Capabilities Assessment effort, the Community Coordinated Modeling Center initiated several working teams, one of which is focused on the validation of models and methods for determining auroral electrodynamic parameters, including particle precipitation, conductivities, electric fields, neutral density and winds, currents, Joule heating, auroral boundaries, and ion outflow. Auroral electrodynamic properties are needed as input to space weather models, to test and validate the accuracy of physical models, and to provide needed information for space weather customers and researchers. The working team developed a process for validating auroral electrodynamic quantities that begins with the selection of a set of events, followed by construction of ground truth databases using all available data and assimilative data analysis techniques. Using optimized, predefined metrics, the ground truth data for selected events can be used to assess model performance and improvement over time. The availability of global observations and sophisticated data assimilation techniques provides the means to create accurate ground truth databases routinely.

     
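To make the metrics step in the abstract above concrete, the following is a hypothetical sketch of scoring a modeled auroral quantity against an event's ground-truth database with a few common skill metrics; the event names, quantities, and synthetic data are placeholders and do not reflect the working team's predefined metrics.

```python
# Hypothetical metrics sketch; all event data below are synthetic placeholders.
import numpy as np

def skill_scores(truth, model):
    """RMSE, mean bias, and prediction efficiency (1 - MSE / variance of truth)."""
    err = model - truth
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "bias": float(np.mean(err)),
        "prediction_efficiency": float(1.0 - np.mean(err ** 2) / np.var(truth)),
    }

# toy event set: synthetic "ground truth" Joule heating and a model run with offset and noise
rng = np.random.default_rng(42)
for event in ("event_A", "event_B"):
    truth = 50.0 + 30.0 * np.abs(np.sin(np.linspace(0.0, 6.0, 240))) + rng.normal(0, 2, 240)
    model = 0.9 * truth + 5.0 + rng.normal(0, 8, 240)
    print(event, {key: round(val, 2) for key, val in skill_scores(truth, model).items()})
```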
  4. Summary

    Forward stagewise estimation is a revived slow-brewing approach for model building that is particularly attractive in dealing with complex data structures for both its computational efficiency and its intrinsic connections with penalized estimation. Under the framework of generalized estimating equations, we study general stagewise estimation approaches that can handle clustered data and non-Gaussian/non-linear models in the presence of prior variable grouping structure. As the grouping structure is often not ideal in that even the important groups may contain irrelevant variables, the key is to simultaneously conduct group selection and within-group variable selection, that is, bi-level selection. We propose two approaches to address the challenge. The first is a bi-level stagewise estimating equations (BiSEE) approach, which is shown to correspond to the sparse group lasso penalized regression. The second is a hierarchical stagewise estimating equations (HiSEE) approach to handle more general hierarchical grouping structure, in which each stagewise estimation step itself is executed as a hierarchical selection process based on the grouping structure. Simulation studies show that BiSEE and HiSEE yield competitive model selection and predictive performance compared to existing approaches. We apply the proposed approaches to study the association between the suicide-related hospitalization rates of the 15–19 age group and the characteristics of the school districts in the State of Connecticut.

     
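The bi-level stagewise idea in the abstract above can be caricatured for ordinary least squares: at each small step, select the most promising group, then the best single coefficient within it. The sketch below is a simplified, hypothetical illustration, not the BiSEE or HiSEE estimating-equation procedures, and the grouping, step size, and data are made up.

```python
# Simplified, hypothetical bi-level forward stagewise selection for OLS.
import numpy as np

def bilevel_stagewise(X, y, groups, eps=0.01, n_steps=500):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_steps):
        score = X.T @ (y - X @ beta) / n            # negative gradient of 0.5 * MSE
        # group-level selection: largest L2 norm of the within-group scores
        g = max(set(groups), key=lambda g_: np.linalg.norm(score[groups == g_]))
        idx = np.where(groups == g)[0]
        # within-group selection: single coordinate with the largest |score|
        j = idx[np.argmax(np.abs(score[idx]))]
        beta[j] += eps * np.sign(score[j])
    return beta

# toy data: 3 groups of 4 predictors; only the first two coordinates of group 0 matter
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
groups = np.repeat([0, 1, 2], 4)
beta_true = np.zeros(12)
beta_true[:2] = [1.5, -1.0]
y = X @ beta_true + rng.normal(0, 0.5, 200)

print(np.round(bilevel_stagewise(X, y, groups), 2))
```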
  5. We present a novel Packet Type (PT)-based design framework for the finite-length analysis of Device-to-Device (D2D) coded caching. By exploiting the asymmetry in the coded delivery phase, two fundamental forms of subpacketization reduction gain for D2D coded caching, i.e., the subfile saving gain and the further splitting saving gain, are identified in the PT framework. The proposed framework features a streamlined design process built on several key concepts: user grouping, subfile and packet types, multicast group types, transmitter selection, local/global further splitting factors, and PT design as an integer optimization. In particular, based on a predefined user grouping, the subfile and multicast group types, and hence the cache placement of the users, can be determined. In this stage, subfiles of certain types can be excluded from the designed caching scheme altogether, which we refer to as the subfile saving gain. In the delivery phase, a careful selection of the transmitters within each type of multicast group allows each subfile to be further split into fewer packets, yielding the further splitting saving gain. The joint effect of these two gains is an overall subpacketization reduction compared to the Ji-Caire-Molisch (JCM) scheme [1]. Using the PT framework, a new class of D2D caching schemes is constructed with an order-wise reduction in subpacketization at the same rate as the JCM scheme.
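As one plausible reading of the type-based bookkeeping in the abstract above (and not the paper's actual PT construction or its integer optimization), the sketch below classifies the t-subsets that index subfiles, and the (t+1)-subsets that could serve as multicast groups, by how many members each draws from a predefined user grouping. The group sizes and cache parameter are illustrative.

```python
# Hypothetical "type" bookkeeping under one reading of the PT framework;
# group sizes and the cache parameter t are illustrative placeholders.
from itertools import combinations
from collections import Counter
from math import comb

group_sizes = [2, 2, 2]     # predefined user grouping (illustrative)
t = 2                       # each subfile is cached by a t-subset of users

users = [(g, i) for g, size in enumerate(group_sizes) for i in range(size)]

def type_of(subset):
    """Per-group membership counts, e.g. (2, 0, 1) = two users from group 0, one from group 2."""
    counts = Counter(g for g, _ in subset)
    return tuple(counts.get(g, 0) for g in range(len(group_sizes)))

subfile_types = Counter(type_of(s) for s in combinations(users, t))
mcast_types = Counter(type_of(s) for s in combinations(users, t + 1))

print("subfiles per file without grouping:", comb(len(users), t))
print("subfile types and multiplicities  :", dict(subfile_types))
print("multicast group types             :", dict(mcast_types))
```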