Title: Analysis of sloppiness in model simulations: Unveiling parameter uncertainty when mathematical models are fitted to data
This work introduces a comprehensive approach to assess the sensitivity of model outputs to changes in parameter values, constrained by the combination of prior beliefs and data. This approach identifies stiff parameter combinations strongly affecting the quality of the model-data fit while simultaneously revealing which of these key parameter combinations are informed primarily by the data or are also substantively influenced by the priors. We focus on the very common context in complex systems where the amount and quality of data are low compared to the number of model parameters to be collectively estimated, and showcase the benefits of this technique for applications in biochemistry, ecology, and cardiac electrophysiology. We also show how stiff parameter combinations, once identified, uncover controlling mechanisms underlying the system being modeled and inform which of the model parameters need to be prioritized in future experiments for improved parameter inference from collective model-data fitting.
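The core computation behind such a stiffness analysis can be sketched in a few lines of numerical linear algebra: evaluate the sensitivity (Jacobian) matrix of the model outputs with respect to the parameters at the best fit, form the Gauss-Newton approximation J^T J to the Hessian of the fit objective, and eigendecompose it. The sketch below is a minimal illustration using a hypothetical two-rate exponential model with made-up best-fit values, not the paper's biochemistry, ecology, or cardiac case studies:

```python
import numpy as np

# Toy model: y(t; k1, k2) = exp(-k1*t) + exp(-k2*t), observed at a few times.
# (Hypothetical model and parameter values, for illustration only.)
t = np.linspace(0.5, 5.0, 10)

def model(theta):
    k1, k2 = theta
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def jacobian(theta, eps=1e-6):
    """Finite-difference sensitivity matrix J[i, j] = d y_i / d theta_j."""
    y0 = model(theta)
    J = np.zeros((len(y0), len(theta)))
    for j in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tp[j] += eps
        J[:, j] = (model(tp) - y0) / eps
    return J

theta_fit = np.array([1.0, 1.2])   # hypothetical best-fit parameters
J = jacobian(theta_fit)
H = J.T @ J                        # Gauss-Newton approximation to the Hessian
evals, evecs = np.linalg.eigh(H)

# Large eigenvalues mark stiff parameter combinations (well constrained by
# the data); small eigenvalues mark sloppy ones (poorly constrained).
for lam, v in zip(evals[::-1], evecs.T[::-1]):
    print(f"eigenvalue {lam:.3e}  direction {np.round(v, 3)}")
```

In a Bayesian treatment like the one described above, the prior's curvature would be added to H before the decomposition; comparing the spectra with and without that term is what lets one separate data-informed from prior-influenced stiff combinations.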
Award ID(s):
1715342
PAR ID:
10383493
Author(s) / Creator(s):
Date Published:
Journal Name:
Science Advances
Volume:
8
ISSN:
2375-2548
Page Range / eLocation ID:
eabm5952
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Complex models in physics, biology, economics, and engineering are often sloppy, meaning that the model parameters are not well determined by the model predictions for collective behavior. Many parameter combinations can vary over decades without significant changes in the predictions. This review uses information geometry to explore sloppiness and its deep relation to emergent theories. We introduce the model manifold of predictions, whose coordinates are the model parameters. Its hyperribbon structure explains why only a few parameter combinations matter for the behavior. We review recent rigorous results that connect the hierarchy of hyperribbon widths to approximation theory, and to the smoothness of model predictions under changes of the control variables. We discuss recent geodesic methods to find simpler models on nearby boundaries of the model manifold—emergent theories with fewer parameters that explain the behavior equally well. We discuss a Bayesian prior which optimizes the mutual information between model parameters and experimental data, naturally favoring points on the emergent boundary theories and thus simpler models. We introduce a ‘projected maximum likelihood’ prior that efficiently approximates this optimal prior, and contrast both to the poor behavior of the traditional Jeffreys prior. We discuss the way the renormalization group coarse-graining in statistical mechanics introduces a flow of the model manifold, and connect stiff and sloppy directions along the model manifold with relevant and irrelevant eigendirections of the renormalization group. Finally, we discuss recently developed ‘intensive’ embedding methods, allowing one to visualize the predictions of arbitrary probabilistic models as low-dimensional projections of an isometric embedding, and illustrate our method by generating the model manifold of the Ising model.
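One can see the hyperribbon structure numerically by sampling a model manifold and measuring its widths along successive principal axes. The toy sketch below assumes a sum-of-exponentials model (a standard sloppy-model example) and uses singular values of the sampled prediction cloud as a stand-in for the hierarchy of manifold widths:

```python
import numpy as np

# Sample the model manifold of y(t) = sum_i exp(-k_i * t) for many random
# rate constants, then measure the cloud of predictions along its principal
# axes.  Toy sketch; the rates and time grid are illustrative choices.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 20)
n_params, n_samples = 4, 2000

# Rate constants drawn log-uniformly over two decades.
log_k = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
preds = np.exp(-10.0**log_k[:, None, :] * t[None, :, None]).sum(axis=2)

# Singular values of the centered prediction cloud approximate the widths
# of the model manifold along successive principal directions; they shrink
# roughly geometrically, which is the hyperribbon signature.
widths = np.linalg.svd(preds - preds.mean(axis=0), compute_uv=False)
print(widths[:n_params + 2] / widths[0])
```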
  2. Parameter estimation from observable or experimental data is a crucial stage in any modeling study. Identifiability refers to one’s ability to uniquely estimate the model parameters from the available data. Structural unidentifiability in dynamic models, the opposite of identifiability, is associated with the notion of degeneracy, where multiple parameter sets produce the same pattern; the inverse problem of determining the model parameters from the data is therefore not well defined. Degeneracy is not only a mathematical property of models, but has also been reported in biological experiments. Classical studies on structural unidentifiability focused on the notion that one can at most identify combinations of unidentifiable model parameters. We have identified a different type of structural degeneracy/unidentifiability present in a family of models, which we refer to as the Lambda-Omega (Λ-Ω) models. These are an extension of the classical lambda-omega (λ-ω) models that have been used to model biological systems, and they display richer dynamic behavior, with waveforms ranging from sinusoidal to square-wave to spike-like. We show that the Λ-Ω models feature infinitely many parameter sets that produce identical stable oscillations, except possibly for a phase shift (reflecting the initial phase). These degenerate parameters are not identifiable combinations of unidentifiable parameters, as is the case in structural degeneracy. In fact, the number of model parameters in the Λ-Ω models is minimal in the sense that each one controls a different aspect of the model dynamics, and removing any of them would reduce the dynamic complexity of the system. We argue that the family of Λ-Ω models serves as a framework for the systematic investigation of degeneracy and identifiability in dynamic models, and for the investigation of the interplay between structural and other forms of unidentifiability resulting from the lack of information in the experimental/observational data.
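This kind of degeneracy is easy to reproduce numerically in the classical λ-ω case. The sketch below assumes the textbook polar form dr/dt = a(1 - r^2)r, dθ/dt = ω, in which the attracting limit cycle (r = 1, frequency ω) is independent of a, so every a > 0 yields the same steady oscillation up to a phase shift; parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical lambda-omega oscillator in polar coordinates:
#   dr/dt = a * (1 - r**2) * r,   dtheta/dt = omega
# The attracting limit cycle r = 1 with frequency omega does not depend on
# a, so distinct values of a give identical stable oscillations.
def lam_om(t, y, a, omega):
    r, theta = y
    return [a * (1.0 - r**2) * r, omega]

t_eval = np.linspace(100.0, 110.0, 500)     # sample after transients decay
for a in (0.5, 2.0, 8.0):                   # degenerate parameter values
    sol = solve_ivp(lam_om, (0.0, 110.0), [0.2, 0.0], args=(a, 1.0),
                    t_eval=t_eval, rtol=1e-9, atol=1e-9)
    x = sol.y[0] * np.cos(sol.y[1])         # observed waveform x = r*cos(theta)
    print(f"a = {a}: steady amplitude ~ {x.max():.6f}")  # same for every a
```

The Λ-Ω family generalizes the functions λ(r) and ω(r) to produce the richer waveforms mentioned above, but the same simulation pattern applies.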
  3. Multiscale systems biology is having an increasingly powerful impact on our understanding of the interconnected molecular, cellular, and microenvironmental drivers of tumor growth and the effects of novel drugs and drug combinations for cancer therapy. Agent-based models (ABMs) that treat cells as autonomous decision-makers, each with their own intrinsic characteristics, are a natural platform for capturing intratumoral heterogeneity. Agent-based models are also useful for integrating the multiple time and spatial scales associated with vascular tumor growth and response to treatment. Despite all their benefits, the computational costs of solving agent-based models escalate and become prohibitive when simulating millions of cells, making parameter exploration and model parameterization from experimental data very challenging. Moreover, such data are typically limited, coarse-grained, and may lack any spatial resolution, compounding these challenges. We address these issues by developing a first-of-its-kind method that leverages explicitly formulated surrogate models (SMs) to bridge the current computational divide between agent-based models and experimental data. In our approach, Surrogate Modeling for Reconstructing Parameter Surfaces (SMoRe ParS), we quantify the uncertainty in the relationship between agent-based model inputs and surrogate model parameters, and between surrogate model parameters and experimental data. In this way, surrogate model parameters serve as intermediaries between agent-based model inputs and data, making it possible to use them for calibration and uncertainty quantification of agent-based model parameters that map directly onto an experimental data set. We illustrate the functionality and novelty of SMoRe ParS by applying it to an agent-based model of 3D vascular tumor growth and to experimental data in the form of tumor volume time courses. Our method is broadly applicable to situations where preserving underlying mechanistic information is of interest, and where computational complexity and sparse, noisy calibration data hinder model parameterization.
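The intermediary role of the surrogate can be sketched as follows, assuming a hypothetical logistic growth law as the explicit surrogate and synthetic stand-ins for both the agent-based model output and the experimental time course (the actual SMoRe ParS machinery additionally quantifies the uncertainty in both mappings):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, v0, r, K):
    """Hypothetical explicit surrogate: logistic tumor-volume growth."""
    return K / (1.0 + (K / v0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 20.0, 15)

def surrogate_params(volume_curve):
    """Map one tumor-volume time course to surrogate parameters (v0, r, K)."""
    p0 = [max(volume_curve[0], 0.1), 0.3, volume_curve.max() * 1.2]
    popt, _ = curve_fit(logistic, t, volume_curve, p0=p0, bounds=(1e-3, 500.0))
    return popt

# Synthetic stand-ins for one expensive ABM run and for noisy data.
abm_output = logistic(t, 1.0, 0.4, 50.0)
data = abm_output + np.random.default_rng(1).normal(0.0, 0.5, t.size)

# Both the simulator output and the data are projected onto the same
# low-dimensional surrogate parameter space, where they can be compared.
print("ABM  -> surrogate params:", np.round(surrogate_params(abm_output), 3))
print("data -> surrogate params:", np.round(surrogate_params(data), 3))
```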
  4. This paper describes a geometric approach to parameter identifiability analysis in models of power systems dynamics. When a model of a power system is to be compared with measurements taken at discrete times, it can be interpreted as a mapping from parameter space into a data or prediction space. Generically, model mappings can be interpreted as manifolds with dimensionality equal to the number of structurally identifiable parameters. Empirically, it is observed that model mappings often correspond to bounded manifolds. We propose a new definition of practical identifiability based on the topological definition of a manifold with boundary. In many ways, our proposed definition extends the properties of structural identifiability. We construct numerical approximations to geodesics on the model manifold and use the results, combined with insights derived from the mathematical form of the equations, to identify combinations of practically identifiable and unidentifiable parameters. We give several examples of application to dynamic power systems models.
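The geodesic construction can be sketched on a toy two-parameter model (not a power-system model, but the mechanics are the same). With the metric g = J^T J induced by the least-squares embedding, the geodesic acceleration reduces to theta'' = -pinv(J) (v^T grad^2 y v), the directional second derivative of the predictions along the velocity v, which is cheap to estimate by finite differences:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Trace a geodesic on the model manifold of a toy two-exponential model.
# Metric g = J^T J; geodesic acceleration is -pinv(J) applied to the
# directional second derivative of y along theta', estimated below with
# central finite differences.  All values are illustrative.
t = np.linspace(0.5, 3.0, 8)

def model(theta):
    k1, k2 = theta
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def jac(theta, eps=1e-6):
    y0 = model(theta)
    return np.stack([(model(theta + eps * np.eye(2)[j]) - y0) / eps
                     for j in range(2)], axis=1)

def geodesic_rhs(s, state):
    theta, v = state[:2], state[2:]
    h = 1e-4
    d2y = (model(theta + h * v) - 2.0 * model(theta) + model(theta - h * v)) / h**2
    acc = -np.linalg.pinv(jac(theta)) @ d2y
    return np.concatenate([v, acc])

# Start at a hypothetical best fit, moving along the sloppiest direction
# (smallest eigenvalue of the metric); the geodesic runs toward a boundary
# of the model manifold, flagging a practically unidentifiable combination.
theta0 = np.array([0.8, 1.6])
evals, evecs = np.linalg.eigh(jac(theta0).T @ jac(theta0))
state0 = np.concatenate([theta0, evecs[:, 0]])
sol = solve_ivp(geodesic_rhs, (0.0, 1.0), state0, max_step=0.02)
print("geodesic endpoint in parameter space:", np.round(sol.y[:2, -1], 4))
```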
  5. Algorithms often have tunable parameters that impact performance metrics such as runtime and solution quality. For many algorithms used in practice, no parameter settings admit meaningful worst-case bounds, so the parameters are made available for the user to tune. Alternatively, parameters may be tuned implicitly within the proof of a worst-case approximation ratio or runtime bound. Worst-case instances, however, may be rare or nonexistent in practice. A growing body of research has demonstrated that a data-driven approach to parameter tuning can lead to significant improvements in performance. This approach uses a training set of problem instances sampled from an unknown, application-specific distribution and returns a parameter setting with strong average performance on the training set. We provide techniques for deriving generalization guarantees that bound the difference between the algorithm’s average performance over the training set and its expected performance on the unknown distribution. Our results apply no matter how the parameters are tuned, be it via an automated or manual approach. The challenge is that for many types of algorithms, performance is a volatile function of the parameters: slightly perturbing the parameters can cause a large change in behavior. Prior research [e.g., 12, 16, 20, 62] has proved generalization bounds by employing case-by-case analyses of greedy algorithms, clustering algorithms, integer programming algorithms, and selling mechanisms. We streamline these analyses with a general theorem that applies whenever an algorithm’s performance is a piecewise-constant, piecewise-linear, or—more generally—piecewise-structured function of its parameters. Our results, which are tight up to logarithmic factors in the worst case, also imply novel bounds for configuring dynamic programming algorithms from computational biology.
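A minimal version of the data-driven approach can be sketched with a toy parameterized greedy heuristic (a standard illustration in this literature, not necessarily the paper's exact setting): score knapsack items by value / weight^rho and choose the rho with the best average value over sampled training instances. Average performance is piecewise constant in rho, changing only where two items' scores cross, which is exactly the piecewise structure such generalization theorems exploit:

```python
import numpy as np

rng = np.random.default_rng(42)

def greedy_value(values, weights, capacity, rho):
    """Pack items in decreasing value / weight**rho order; return total value."""
    order = np.argsort(-values / weights**rho)
    total_w = total_v = 0.0
    for i in order:
        if total_w + weights[i] <= capacity:
            total_w += weights[i]
            total_v += values[i]
    return total_v

# Training set sampled from a synthetic stand-in for the (unknown)
# application-specific instance distribution.
instances = [(rng.uniform(1, 10, 50), rng.uniform(1, 10, 50), 60.0)
             for _ in range(200)]

# Empirical average performance over a grid of parameter values.
grid = np.linspace(0.0, 2.0, 41)
avg = [np.mean([greedy_value(v, w, c, rho) for v, w, c in instances])
       for rho in grid]
best = int(np.argmax(avg))
print(f"best rho on training set: {grid[best]:.2f} (avg value {avg[best]:.1f})")
```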